As a user of `relativenumber`, I get a bit more mileage out of `j` and `k` than many Vim users. It lets me see at a glance how far away a line is, making jumps like `28j` easy and practical. However, `relativenumber` works with lines delimited by newline characters, not the lines you see on the screen. And if a line wraps, `j` and `k` will move by text lines, not visual lines, causing awkward jumps. Because of this, many users have this in their .vimrcs:
nnoremap j gj
nnoremap k gk
But, since `j` and `k` no longer act on text lines, `23j` may no longer go to the line marked 23 in the gutter. If there is a wrapped line between the cursor and the target, it will actually take multiple presses of `j` to pass that line, meaning that the cursor's end position will be too high. A good way to get around this would be to have single presses of `j` and `k` act as `gj` and `gk`, while `j` and `k` with a count would act normally.
The other issue that bothered me was that large `j` and `k` jumps didn't get added to the jumplist, meaning that there was no easy way to undo and redo them. That can be fixed by automatically setting the `'` mark if `j` or `k` is executed with a count.
So, without further ado, this is what I've come up with:
nnoremap <silent> k :<C-U>execute 'normal!' (v:count > 1 ? "m'" . v:count : 'g') . 'k'<CR>
nnoremap <silent> j :<C-U>execute 'normal!' (v:count > 1 ? "m'" . v:count : 'g') . 'j'<CR>
This does the following:
- For a `j` command with a count n, `m'nj` is executed instead
- For a `j` command without a count, `gj` is executed instead
It's not perfect - in visual mode, `j` and `k` revert to their normal definitions - but I'm really enjoying its more intuitive behaviour in normal mode.
Update - Dec. 18, 2014: TypeScript will be fixing many of these issues in v1.4. That said, I'm now more excited about Facebook's Flow type checker: even though it was just released, it's more full-featured than TypeScript right now, it seems more focused on expressive JS-oriented type checking, and the devs seem more engaged with the community.
TypeScript is Microsoft's attempt to bring type checking to the Wild West of JS. It also brings features such as arrow functions and "classes" inspired by ES6 spec drafts. After working with it for a while, I feel it has some nice bits, but on the whole it's sorely lacking. It seems to be trying to turn JS into C# while ignoring the drawbacks and limitations of that approach. Here I'd like to go through the major features of TypeScript and identify how it got them wrong.
Inexpressive Types
Despite having structural types (yay!), TS has a remarkably inflexible type system. It's lacking many powerful features that are common in modern structural type systems, which reduces its ability to model and verify programs. However, TypeScript has an even greater requirement in that it must be able to describe the types of existing JS code, which includes functions that wouldn't be allowed in many strongly typed languages. Even though this is difficult, TS falls short.
Union Types
The feature I find the most lacking is union types: where a value can be considered to be one of two types. This is so common in JS that I can't understand why TypeScript wouldn't include it. Sure, you can implement an `Either<TLeft, TRight>` type in TS, but the lack of native support forces the use of `any` in many cases, which removes type verification. One of the first bugs I had to deal with in TypeScript was caused by an Underscore function that returned a number or a given generic type, but the TypeScript annotation simply said that it returned the generic type (a bug that still exists). This problem has been raised, but there doesn't seem to be any interest from the devs, possibly because it's a feature that's alien to languages like C# and Java.
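To make that concrete, here's a sketch of the Underscore situation described above. The names and signatures are illustrative, not the real typings, and the first line uses the `|` union syntax that TypeScript only picked up later, in 1.4:
// With union types, the honest signature is easy to write:
declare function maxBy<T>(list: T[], iterator: (el: T) => number): T | number; // number covers the empty-list case
// Without them, the declaration either lies about the empty-list case...
declare function maxByLying<T>(list: T[], iterator: (el: T) => number): T;
// ...or throws away checking altogether:
declare function maxByUnchecked(list: any[], iterator: (el: any) => number): any;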
Higher-kinded Types
Consider the following interfaces:
interface Orderable<Coll<T>> {
sortBy: (comparator: (a: T, b: T) => number) => Coll<T>;
}
interface Mappable<Box<T>> {
map: <U>(f: (el: T) => U) => Box<U>;
}
(Those familiar with functors will recognize the second one, but I'm calling it Mappable to keep things accessible.)
These are pretty clear and useful types. They represent, respectively, collections that can be sorted to return the same kind of collection and types that contain a value that can be transformed with a function. You can then use them in less abstract types:
interface Sequence<T> extends Mappable<Sequence<T>> {
first: () => T;
rest: () => Sequence<T>;
cons: (t: T) => Sequence<T>;
empty: () => boolean;
}
function list<T>(): Sequence<T> {
function cons(e: T, l: Sequence<T>): Sequence<T> {
var me = {
first: () => e,
rest: () => l,
empty: () => false,
map: <U>(f: (el: T) => U) => l.map(f).cons(f(e)),
cons: v => cons(v, me)
};
return me;
}
var empty: Sequence<T> = {
first: () => null,
rest: () => empty,
map: () => empty,
empty: () => true,
cons: v => cons(v, empty)
};
return empty;
}
So our sequence type just extended `Mappable` and automatically got a definition for a `map` function that takes a `T => U` function and returns a `Sequence<U>`. This is nice for conciseness, and it enables us to write functions that can take any `Mappable` or a similar type and handle them without having to know the underlying implementation. There's just one problem: TypeScript can't do this. More specifically, it doesn't allow nested generics like `Mappable<Box<T>>`, where `Box` and `T` aren't known by `Mappable`. Instead, we must write `Mappable<T>`, where the type signature of `map<U>` is `(f: (t: T) => U) => Mappable<U>`. That means that something extending `Mappable` doesn't have to return the same `Box` type. For example, our sequence's `map` function could return a promise, an `Either`, a tree, or any other value as long as it implemented `Mappable`. Also, the expression `l.map(f).cons(f(e))` would cause a type error because TS wouldn't know that `l.map(f)` returns a sequence rather than an unspecified `Mappable`. This is a violation of type safety, a failure to represent `map` generically, and, more importantly, it prevents us from encoding useful abstractions like `Mappable`.
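For comparison, here's roughly the best you can write in TypeScript today - a sketch showing the weaker types that cause the problems just described:
interface Mappable<T> {
    // The most we can say: map returns *some* Mappable<U>,
    // not necessarily the same kind of container we started with.
    map: <U>(f: (el: T) => U) => Mappable<U>;
}
interface Sequence<T> extends Mappable<T> {
    first: () => T;
    rest: () => Sequence<T>;
    cons: (t: T) => Sequence<T>;
    empty: () => boolean;
}
// l.map(f) is only known to be a Mappable<U> here, so calling .cons() on it is a type error:
// var bumped = (l: Sequence<number>) => l.map(x => x + 1).cons(0);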
Failure to Model JS Values
In practice, TS types often can't represent JS values. There are just too many kinds of data and functions that are commonly used in JS for TS's limited type system to handle. One example is using arrays as tuples, which are generally implemented in typed languages as sequences with a specified number of elements, each with its own type. Again, TypeScript has no support at all, making it impossible to correctly model JS code that uses them.
On the whole, you can look through the TS typings for just about any JS library and tell how bad a job it does by the sheer number of `any`s in places where the actual type is well-defined but inexpressible by TS's poor type system.
(If you want to see a type system with a similar goal to TypeScript that does it a lot better, look at Clojure's core.typed.)
Faulty Type System
And, despite adding a type system for correctness, TypeScript fails to eliminate what's probably the most common error that a type system could fix: `TypeError: <thing> is undefined`. This is because TypeScript does have one kind of union type: every type is actually a union of that type, `null`, and `undefined`. So I can write the following code:
var x: number = null;
console.log(x.toString());
And TypeScript won't bat an eyelid. In any real JS program, this represents a huge class of errors that will go unchecked. And it doesn't have to be this way; many modern typed languages require you to deal with nil values in a type-safe manner, as they should (Haskell, F#, Rust, OCaml, etc.). Again, this seems to be caused by the unfortunate influence of Java/C# and really reduces the practical benefit of TS.
Annoying Type Syntax
Functions
Function type signatures should be pretty simple, right? You just need something like `(number, string) => string`, maybe with corresponding syntax for rest and optional parameters. Well, unfortunately, TS overcomplicates this. First of all, function parameters need to be named in the type, not just in the function literal. Not only is this unusual and redundant, it often leads to devs writing things like `(n: number, s:string) => string` and creating useless noise.
The other bizarreness is that there are three different ways to define a function type, but you can't always use all three, depending on context.
map: <U>(f: (el: T) => U) => Box<U>;
map<U>(f: (el: T) => U): Box<U>;
map: {<U>(f: (el: T) => U): Box<U>};
So, TypeScript function typing is far more complex than it needs to be.
No Type Aliases
When you're working with a structural type system, the names you give types don't actually matter, since type compatibility is determined by the structure of the types. So, if you have something like this:
interface Foo {
a: number;
b: string;
}
You're just declaring the name `Foo` to be equivalent to `{a: number; b: string;}`. So it would make sense to have a syntax like `type Foo = {a: number; b: string;};`. However, TypeScript went with a C#-ey interface syntax, which only allows type aliases for objects. So, there's no equivalent for these:
type OscillatorType = string;
type Deck = Set<Card>;
type Comparator<T> = (a: T, b: T) => number;
(Actually, you can do the last one using `interface`, but the syntax is clunky and weird.)
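For the record, here's roughly what that interface-based encoding of the `Comparator` alias looks like - a call-signature interface sketch:
// The interface-flavoured stand-in for: type Comparator<T> = (a: T, b: T) => number;
interface Comparator<T> {
    (a: T, b: T): number;
}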
So, aside from being more complex and less flexible than something like `type`, `interface` is far less intuitive. It's as if TypeScript is in denial about using structural types.
Clunky Intersection Types
A similar concept to union types is intersection types: where you specify that a value must satisfy two types. So, for example, if you have an argument to a function that must be a Thenable and a Runnable, you could ideally do something like `param: Thenable & Runnable`. You can do this in TS, but it's messy because it uses interfaces (which are clearly pretty overburdened):
interface ThenableRunnable extends Thenable, Runnable {}
var myFn : (param: ThenableRunnable) => Thenable;
"Classes"
The other main change that TypeScript makes is that it adds "classes" to JS. I'm using quotes because it doesn't actually add any new semantics: a TS class is equivalent to a JS constructor. It adds some sugar to make it look Java/C#-ey, but ultimately it's still just functions and prototypical objects.
First of all, classes are the last feature I think should be added to JS. When we have higher order functions, we can construct much more powerful abstractions (see SICP/HTDP for this approach) rather than taking the messy, inflexible set of additions to C-ish structs that classes are. I understand that this is an argument I wouldn't win with many people, so I'm not going to go into depth, but this post explains well why JS shouldn't have classes.
Secondly, this leads you into the minefield that is `this`. Rather than `this` being bound like it is in Java/C#, it's generally determined by the object the function is called from. This works to a certain extent when using prototypical inheritance, but in practice it leads to non-composable and unpredictable functions, as well as silliness like `Function.prototype.call.bind(Array.prototype.slice)`. It's not hard to avoid `this`, but TypeScript uses it enthusiastically in classes. That's further complicated by the fact that arrow functions have lexical `this` (it's the instance of the class that they're defined in, not the object they're called on), while method-like functions and regular functions have JS's normal dynamic `this`. So this kind of messiness is a clear example of why classes don't translate well to JS.
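A quick sketch of the mismatch (an illustrative class, not from any real codebase): the method loses its `this` when detached, while the arrow-function property keeps it.
class Counter {
    count: number = 0;
    // Method: `this` is whatever object the function is called on.
    incMethod() { this.count++; }
    // Arrow-function property: `this` is lexically the instance.
    incArrow = () => { this.count++; };
}
var c = new Counter();
var m = c.incMethod;
var a = c.incArrow;
m(); // `this` isn't the counter here - blows up or increments the wrong thing
a(); // still increments c.count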
And lastly, classes complect type definitions with behaviour. I'm fine with them having inferred types, but often, in TypeScript code, one ends up being pushed into using classes in order to get the types of objects easily shared between modules. Using interfaces or inference instead in non-classical code often results in longer code for defining types and having to use arcane features like ambient modules and TS's `typeof`. You shouldn't have to use a bad construct like classes in order to get convenient cross-module object typing.
Does It Solve the Problems of JS?
So the recurring theme here is that the TS developers have repeatedly chosen C#-ey approaches over more useful ones (I assume C# since it's a Microsoft effort, but they could be aiming for Java-ey too). Whether or not you think this is a good goal is a matter of opinion, but I hope I've shown here that in practice, it integrates poorly with JS. In particular, a C#-ey type system proves to be very limited in modelling JS values and adding type safety.
So, if you consider lack of type safety to be JS's largest deficiency, TypeScript isn't an adequate solution. If you want ES6 features, TypeScript isn't an adequate solution, since it only has a few of them. And if you want classes like C# (ugh), then TypeScript isn't an adequate solution, since its classes are a thin film of sugar over totally different semantics. It only really works if you want a half-assed implementation of all three.
What I'd like to see instead is something like clojure.core.typed for JS. That is, something that only provides type annotation and type checking but that is designed to accommodate the way the language is written and therefore allows a far wider range of types. Not being based on a C#-ey type system would also allow the inclusion of more powerful type features such as higher-kinded types. Note that such a checker could use special comments for annotations, meaning that it could work with normal JS files. In short, a type checker that does one thing and does it well.
The concept of 'expressiveness' is one that appears a lot in programming language debates. Broadly speaking, it means the ease with which a language can express ideas. This is often taken to mean whether the language can express common constructs prettily and tersely. But I feel that this isn't the whole story. For example, I'm not a fan of 'expressive' syntactical constructs like list comprehensions. Sure, they can express some common list operations in a readable manner, but they're inherently limited to the features supported by the syntax. Once you want to do an operation that isn't part of the list comprehension syntax, you have to fall back on the primitive constructs of the language, and you'd better hope that they don't uglify the whole thing. I feel that an expressive language should go further.
In particular, it bothers me to see people gushing about how the new Python-ish features of ES6 such as `for..of`, list comprehensions, and classes will finally make JS expressive. I disagree for a few reasons:
- Expressive programming is an approach to programming, not a characteristic of the language
- This approach is already possible and easy in JS
- Whether a language can be used expressively is far more determined by its powerful generic features, such as first-class functions and metaprogramming, than problem-specific syntactical constructs, like list comprehensions
Instead, I would contend that expressive programming is about writing operations in a way that closely resembles a simple abstract description of each operation, using as few unimportant programming concerns as possible, and that this can be accomplished without highly specialized syntactical constructs.
I'll show some code to explain. A simple example of a non-expressive construct is the for loop. For example, it's commonly used to do a thing for each element of an array:
for (var i = 0; i < arr.length; i++) {
    // do something with arr[i]
}
Conceptually, we want to just do something for each element of an array, but the for loop forces us to deal instead with handling an index variable and using it. This is an example of incidental complexity: it's an implementation detail we don't care about and that gets in the way of expressing the idea we want to express. A more expressive formulation would be the following:
forEach(arr, el => {
    // do something with el
});
You'll note that this is very close to our original statement of the problem: do a thing for each element of an array, just with the words rearranged. It's way more expressive than the for loop, and we can make it ourselves very easily without needing to wait for the language designers to add special syntactical constructs like `foreach` or `for of` loops:
function forEach(arr, fn) {
    for (var i = 0; i < arr.length; i++) {
        fn(arr[i]);
    }
}
This shows why higher order functions are so important for expressive programming. They let you make abstractions like our forEach function that can abstract over behaviour, rather than just dealing with data structures. This means you have far more flexibility in the expressive constructs you use and you have access to a much wider variety of them without being limited by the syntactical constructs of the language. Any decent functional list library will have a range of functionality many times bigger than specialized list handling syntax can muster.
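For instance, once you have `forEach`, further abstractions are only a few lines each - here's a sketch of a `map` built on top of it:
// map expressed in terms of forEach: collect the result of fn for each element.
function map(arr, fn) {
    var result = [];
    forEach(arr, function(el) {
        result.push(fn(el));
    });
    return result;
}
map([1, 2, 3], function(x) { return x * 10; }); // [10, 20, 30]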
What this example also shows is that expressive programming is not just a characteristic of the language, it's an approach the programmer must take. This is especially true in JavaScript, where you have access to both high-level functional approaches and low-level C-ish constructs like loops and switch statements. And I would point to Scheme as a great example of a language that despite having few specialized features can be programmed very expressively (see SICP and the beautiful functions therein).
A good way to get used to this approach is to tackle problems in the following way:
- Determine an approach that will solve your problem in terms of simpler and more generic operations
- Write that approach in a straightforward expressive way, even if it means using functions you don't have yet
- Implement those functions the same way, writing them expressively in terms of smaller problems
- Continue until all of the necessary functions are implemented
- In some cases, generic operations might need to use low-level approaches, like the `forEach` function above
Let's do an example. Project Euler problem 3 asks for the largest prime factor of 600851475143. I'll use ES6 arrow functions for readability, but these are easily translatable into normal JS functions.
var projectEuler3 = () =>
Math.max.apply(null, primeFactors(600851475143));
So, we've expressed exactly what we're looking for: the maximum element of the list of prime factors of 600851475143. Now, we need to implement primeFactors. Let's use the following algorithm:
- Find the first number from 2 to sqrt(x) that is a factor of x
- If such a number exists, return that number, along with the prime factors of x divided by that number
- Otherwise, return an array just with x (because x is prime)
var primeFactors = x => {
var factor = find(range(2, Math.floor(Math.sqrt(x)) + 1),
n => isFactor(n, x));
return factor !== null ?
[factor].concat(primeFactors(x / factor)) :
[x];
};
Again, really close to how we expressed the solution. But we used the functions `find`, `range`, and `isFactor`, so let's implement those. (The reason for adding one to the square root calculation is that range functions are generally inclusive on the lower bound and exclusive on the upper bound.)
var isFactor = (a, b) => isInteger(b / a);
var isInteger = x => x % 1 === 0;
A number is a factor of another if the result of their division is an integer, and a number is an integer if the number modulo one is zero. Reads like a book, although in reality you certainly wouldn't constantly reimplement these.
Now we're down to more generic, simpler functions. Unfortunately, given the features JS provides, these have to be a bit more low-level. But they're still much shorter and easy to understand than the monolithic solutions you usually see for these kinds of problems. They're also generic enough to be gotten from libraries or put in a library and reused.
var find = (arr, pred) => {
for (var i = 0; i < arr.length; i++)
if (pred(arr[i]))
return arr[i];
return null;
};
var range = (from, to) => {
var result = [];
for (var n = from; n < to; n++)
result.push(n);
return result;
};
So here, we're using `for` loops to implement the more abstract operations of finding the first element of a list that matches a predicate (a higher order function!) and getting a range of numbers. Any decent functional list library will provide similar functions, and `Array.prototype.find` is coming in ES6.
And we're done!
So, this is how I see expressive programming. It's about coding your solutions as closely to the conceptual solutions as possible. This can be done with short, simple, pure functions and taking advantage of higher-order functions to make powerful, expressive abstractions. And it has many benefits:
- The problem-specific functions are short and easily understood
- Little incidental complexity
- Each function is easily testable and reusable
- Understanding and writing each function has a low cognitive load
- Low-level and difficult-to-read approaches are only used when really necessary
- Allows for easy construction of abstraction barriers
- No specific syntax needed
So, as a JavaScript programmer, the features I am most excited about in ES6 are those that help with this goal, like arrow functions, which make functions easier to read and use, and the new HOFs coming to Array.prototype. Those are the features that really help expressive programming in JS, not limited syntactical additions.
Many functional programming languages such as Scheme, Clojure and Haskell are heavily based on list processing, which has proved to be a useful approach for dealing with data and code alike. In particular, they tend to have a wide range of useful list processing functions that can simplify the use of lists while allowing them to replace constructs like loops. While JS doesn't share the elegance or theoretical purity of such languages, it took some cues from them in ES5 when the `map`, `reduce`, `reduceRight`, `some`, `every`, and `filter` functions were added to `Array.prototype`. These higher-order functions added flexibility, better scoping, and simplicity to programming techniques that were usually previously accomplished with `for` loops. JS is still lacking many of the useful features that functional languages use for creating and processing lists, but many of them can be implemented fairly easily in order to make it easier to use arrays and banish loops once and for all.
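As a quick reminder of what those ES5 methods buy you, here's a loop-free version of a typical accumulate-in-a-loop pattern (a small illustrative example):
// Sum of the squares of the even numbers, with no index bookkeeping:
var nums = [1, 2, 3, 4, 5, 6];
var sumOfEvenSquares = nums
    .filter(function(n) { return n % 2 === 0; })
    .map(function(n) { return n * n; })
    .reduce(function(a, b) { return a + b; }, 0); // 56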
`range`, or list comprehensions without the sugar
Languages like Python and CoffeeScript have list comprehensions: terse syntaxes for making lists with given ranges and constraints. However, I agree with the LISP philosophy that you shouldn't solve such simple problems by throwing more syntax at them - existing syntax should be used instead. For example, Clojure uses a couple of regular functions to do the same thing: whereas you could write `(i * 5 for i in [1..5])` in CoffeeScript, the equivalent Clojure would look like `(for [i (range 1 6)] (* i 5))`, which just uses function calls and a binding form, maintaining syntactic simplicity. Well, the same approach can be applied in JavaScript with the following helper function:
function range(startOrEnd, end, step) {
var start;
if (arguments.length > 1) {
start = startOrEnd;
} else {
start = 0;
end = startOrEnd;
}
step = step || 1;
if (step > 0 && start > end || step < 0 && start < end)
return [];
var result = [];
if (step > 0)
for (var i = start; i < end; i += step)
result.push(i);
else
for (var i = start; i > end; i += step)
result.push(i);
return result;
}
This function acts more or less the same way as Clojure's range function:
range(5);         // [0, 1, 2, 3, 4]
range(-2, 3);     // [-2, -1, 0, 1, 2]
range(10, 0, -1); // [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
So, first of all, this pretty much replaces `for` loops. Instead of writing `for (var i = 0; i < 10; i += 2) { ...`, you can write `range(0, 10, 2).forEach(function(i) { ...`, which is clearer in my opinion and carries other stylistic benefits. Also note that if you use `map` instead of `forEach`, you can get an array of results as well as getting the for loop behaviour, which can be very convenient and which brings us to our final goal of emulating list comprehensions:
range(1, 6).map(function(x) { return x * 5; }); // [5, 10, 15, 20, 25]
That's nice and useful, but not very pretty. Fortunately ES6 arrow functions will make it look much nicer. (ES6 also has list comprehensions, which I don't think are really necessary, but ES6 is proving to be a mish-mash of features anyway).
range(1, 6).map(x => x * 5); // [5, 10, 15, 20, 25]
And there you have it: the `for` loop and list comprehension killer. You'll occasionally see hacks like `Array.apply(null, Array(10)).map(Number.call, Number)` to get a range on the fly, but you're much better off doing it properly with a helper function or using one from a library like Underscore.
More than any other function on this page, I'd like to see this implemented as a native function, `Array.range` perhaps. It's really the last piece in the puzzle to making full use of `forEach`, `map`, etc.
Zipping around
function zipWith(fn) {
var arrays = Array.prototype.slice.call(arguments, 1);
if (arrays.length < 2)
throw new Error('zip requires at least 2 arrays');
var length = arrays.slice(1).reduce(function(minLength, arr) {
return arr.length < minLength ? arr.length : minLength;
}, arrays[0].length);
var result = [];
for (var i = 0; i < length; i++) {
result.push(fn.apply(null, arrays.map(function(arr) {
return arr[i];
})));
}
return result;
}
This one is in Haskell and Clojure (as `map`). It takes a function as its first parameter and at least two arrays as subsequent parameters, then it returns an array of the results of calling the function with the array elements at the corresponding indices as arguments. So for example, if you called `zipWith(fn, arr1, arr2)`, it would return `[fn(arr1[0], arr2[0]), fn(arr1[1], arr2[1]), fn(arr1[2], arr2[2]), ...]`. `zipWith.apply(...)` is especially useful for working with matrices, but it has a range of other uses. Here are a couple of practical examples:
var arr1 = [1, 2, 3], arr2 = [6, 2, -1];
zipWith(function(a, b) { return a + b; }, arr1, arr2); // [7, 4, 2]
zipWith(function(a, b) { return a === b; }, arr1, arr2).every(function(x) { return x; }); // false (element-wise equality)
And while we're talking about array equality,
Array equality
When you're using arrays as your main data structure, you need to be able to check whether one array has the same values as another. There are good reasons for arrays to be treated as unique by the comparison operators, but you will need a function like this in order to do functional-style list processing. Note that this uses deep equality testing for arrays and shallow equality testing for other objects.
function arraysEqual(arr1, arr2) {
    if (arr1 == null || arr2 == null)
        return arr1 === arr2;
    if (arr1.length != arr2.length)
        return false;
    for (var i = 0; i < arr1.length; i++) {
        if (Array.isArray(arr1[i]) && Array.isArray(arr2[i])) {
            if (!arraysEqual(arr1[i], arr2[i]))
                return false;
        } else {
            if (arr1[i] !== arr2[i])
                return false;
        }
    }
    return true;
}
arraysEqual([1, 2, [3, 4, 5]], [1, 2, [3, 4, 5]]); // true
arraysEqual([1, 2, [3, 4]], [1, 2, 3, 4]);         // false
Repetition
`range` is by far the most useful list builder function, but sometimes it comes in handy to make a list that's just the same thing over and over again:
function repeat(times, value) {
var result = [];
while (times > 0) {
result.push(value);
times--;
}
return result;
}
And some examples:
var googol = '1' + repeat(100, '0').join(''); // a 1 followed by a hundred zeros
function rollDie(sides) {
    return Math.floor(Math.random() * sides) + 1;
}
repeat(10, 6).map(rollDie).reduce(function(a, b) { return a + b; }); // total of ten rolls of a six-sided die
Nesting and unnesting
Last of all, it's often useful to deal with lists within lists, so here are a couple of functions for that. This first one returns a list split up into sublists of a given length.
function partition(n, array) {
var result = [];
var length = Math.floor(array.length / n);
for (var i = 0; i < length; i++) {
result.push([]);
for (var j = 0; j < n; j++) {
result[i].push(array[n * i + j]);
}
}
return result;
}
partition(3, range(9));      // [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
partition(4, repeat(16, 0)); // [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
And this takes a nested list and flattens it into a single layer.
function flatten(arr) {
var result = [];
arr.forEach(function(el) {
if (Array.isArray(el))
result = result.concat(flatten(el));
else
result.push(el);
});
return result;
}
flatten([1, 2, [3, 4, [[[5]]]]]); // [1, 2, 3, 4, 5]
flatten(repeat(4, range(2)));     // [0, 1, 0, 1, 0, 1, 0, 1]
Conclusion
The point of these functions isn't just to make list processing a bit less wordy; they allow you to manipulate lists in a completely different way. Instead of dealing with lists in the way you generally see in the functions themselves - changing an index variable to represent the current element, pushing and changing arrays of results, etc. - you can express most list functions in a single statement, with no variable modifications whatsoever. Then, you can build up list operations and eventually entire programs by combining such functions into other functions, without having to mentally keep track of variables changing state. It takes some getting used to, but it's a great way to program.
Here's an example: the Vigenère cipher. Basically, it's a Caesar cipher, but each letter is shifted by a different amount depending on a key, which if too short is repeated.
function modAddCapsLetters(a, b) {
return String.fromCharCode((a.charCodeAt(0) + b.charCodeAt(0) - 65 * 2) % 26 + 65);
}
function vigenereImp(str, key) {
    var keyPos = 0, result = "";
    for (var i = 0; i < str.length; i++) {
        result += modAddCapsLetters(str.charAt(i), key[keyPos]);
        keyPos++;
        if (keyPos >= key.length)
            keyPos = 0;
    }
    return result;
}
function vigenere(str, key) {
var keyRepeats = Math.ceil(str.length / key.length),
repeatedKey = flatten(repeat(keyRepeats, key.split('')));
return zipWith(modAddCapsLetters, str.split(''), repeatedKey).join('');
}
vigenere("LOOPSAREFORTHEWEAK", "LISPY");
So the imperative way does it step by step. It creates a variable `i` that increases for each letter, a variable `keyPos` that increases for each letter but gets reset to 0 once it's equal to the key length, and then shifts the letter in the input string by the given letter in the key. It concatenates each one to the end of a result string, then returns it. Simple enough, but it increases in complexity based on the amount of mutable data you have to keep track of. The functional way, instead, finds out how many times to repeat the key, creates a new array of letters with the key repeated the necessary amount, and then zips that with the original string using `modAddCapsLetters`. It's really a matter of preference, but I find the latter way of doing things conceptually simpler, and that helps a lot when building larger programs.
Check out Underscore or Lo-Dash for implementations of many of these functions.
Note: Do not try this at work. It's not that bad but your coworkers and Douglas Crockford might get cross.
`with` statements are a little-used, oft-reviled, and underappreciated part of JavaScript. Basically, they allow you to write a statement, often a block, with the properties of a given object added to the scope. Here's an example:
var x = Math.cos(3/2 * Math.PI);
with(Math) {
var y = cos(3/2 * PI);
}
x; // a tiny float ≈ 0 (cos(3/2 * PI) is exactly 0 mathematically)
y; // the same value - inside the with block, cos and PI resolve to Math.cos and Math.PI
The thing is, `with` statements are almost universally renounced in the JavaScript community. For example, you can read Douglas Crockford's attack on `with` from 2006 here. First of all, `with` can slow code down by making it difficult for the engine to know what variable is being referred to. But the main problem with `with` is that it complicates JS's notion of scope. Without `with`, the variables available in a scope are all of the global variables plus any variables made in local scopes using `var` or `function` statements. All of these variables can be both accessed and modified. But using `with` adds variables to the local scope that were not declared with a `var` or `function` statement and shadow those that were. Here's an example of the confusion that can be caused:
var obj = {
a : 1,
b : 2
};
with (obj) {
a = 3;
var b = 4;
b = 5;
c = 6;
}
Now, what are the values of `obj.a`, `obj.b`, `obj.c`, `a`, `b`, and `c`? ANSWER: `obj` is `{a : 3, b : 5}`, `a` isn't defined, `b` is undefined (the `var` declaration is hoisted out of the block, but both assignments to `b` inside the `with` go to `obj.b`), and `c` is 6.
So there are good reasons to avoid `with`. In fact, ES5's strict mode prohibits its use. But the level of hatred and fear directed at it isn't proportional to its flaws and ignores the legitimate uses of `with`, which I'll cover now.
Libraries and Modules
To use a library in JS, one generally has to constantly refer to its object when using its functions, for example using `jQuery.ajax` instead of `ajax`. This has led JS libraries to adopt short names such as `goog`, `_`, or `$` for somewhat easier typing, but with the costs of poor readability and losing useful short local variable names. Not adding the library functions to the scope is fine for libraries that you aren't using much, but can be inconvenient for libraries you're using heavily, which is why most programming languages provide a way to import the functions of a module into the current scope. Well, JS has one too:
function randomAngle(steps) {
with(Math) {
if (!steps)
return random() * 2 * PI;
else
return floor(random() * steps) / steps * 2 * PI;
}
}
function randomAngle2(steps) {
if (!steps)
return Math.random() * 2 * Math.PI;
else
return Math.floor(Math.random() * steps) / steps * 2 * Math.PI;
}
Yes, it's the same `Math` example. But it shows an important point: `with` can make dealing with libraries and built-in modules a lot easier. Unfortunately, with current attitudes towards `with`, we're stuck waiting until ES6 for a (hopefully) accepted way to do this.
Block Scope
One oddity that makes JavaScript different from most C-style languages is the lack of block scope. Instead of local variables being scoped to the nearest block they're declared in (delimited by `{` and `}`), like in C or Java, JS variables are scoped to the nearest `function() {...}` in which they are declared. This is mostly fine (and in my opinion, not a problem at all if you use higher order array iterators), but can be occasionally problematic.
A common issue in asynchronous JS is using callbacks in a loop:
for (var i = 0; i < 5; i++) {
setTimeout(function() { console.log(i); }, 10);
}
So, since the `for` loop finished executing before the callbacks were executed, the value of `i` is 5 every time. In other languages, to get around this, you'd just add a block-scoped variable for each iteration of the loop. In fact, this can be done right now in Firefox, but won't be standard until ES6 (see Solution 1 below). So, the standard solution is to wrap the whole thing in an IIFE (see Solution 2), which is widely supported but adds a lot of visual noise. The other solution is to use `with` to emulate a block scope (Solution 3):
// Solution 1: let (ES6; currently Firefox-only)
for (var i = 0; i < 5; i++) {
    let j = i;
    setTimeout(function() { console.log(j); }, 10);
}
// Solution 2: IIFE
for (var i = 0; i < 5; i++) {
    (function(j) {
        setTimeout(function() { console.log(j); }, 10);
    })(i);
}
// Solution 3: with
for (var i = 0; i < 5; i++) {
    with ({j : i})
        setTimeout(function() { console.log(j); }, 10);
}
Basically, `with` lets you make block scoped variables. In fact, it's very similar to the `let` blocks that are coming in ES6:
var a;
let (b = 2, c = 3) {
    a = b + c;
}
a; // 5
with ({b : 4, c : 7}) {
    a = b + c;
}
a; // 11
So, while function scoped variables are usually adequate, `with` lets you use block scoping when you need it.
Summary
Pros:
- `with` makes it easier to work with libraries and modules.
- `with` allows you to clearly emulate block scope.
- In general, using `with` sparingly can make your code easier to read and write.
Cons:
- Using `with` poorly can result in unclear code.
- `with` is rejected by most linters and style guides.
- `with` can make code slower.
- ES5 strict mode forbids the use of `with`.
Conclusion
I've identified two cases where `with` can make for clearer code and emulate features that exist in most other languages. However, the JS community's aversion to `with` makes it almost unusable except in personal projects. Fortunately, both of its use cases will be replaced in ES6 by `let` and `import`, but for now, many coders are depriving themselves of a useful tool. So, don't use it at work, but if you have some hobby coding where readability is more important than speed, don't be too afraid to use `with`.
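For reference, here's roughly what those ES6 replacements look like (the 'math' module name is hypothetical - Math isn't actually importable as a module - but it shows the shape of things):
// Module imports instead of with(Math):
import { cos, PI } from 'math'; // hypothetical module
var x = cos(3/2 * PI);
// Block scoping with let instead of with({j : i}):
for (let i = 0; i < 5; i++) {
    setTimeout(function() { console.log(i); }, 10); // logs 0 through 4
}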
A JavaScript question that often pops up is "How do I set the prototype of an object literal?" The short answer is that right now, you can't. When ES6 standardizes the `__proto__` property, you'll be able to do so directly, but right now, there's no native language construct. The good news is that it is downright simple to make a helper function that will let you use object literals in inheritance:
function extend(proto, literal) {
var result = Object.create(proto);
Object.keys(literal).forEach(function(key) {
result[key] = literal[key];
});
return result;
}
You use it by calling it with the parent object as the first argument and the literal with the changes you want to make as the second: var myObj = extend(parent, {foo : 2, bar : 3});
Here are some more examples:
var dog = {
mammal : true,
domestic : true,
weight : 50,
speak : function() {
return "woof";
}
};
var littleDog = extend(dog, {weight : 10});
littleDog.speak(); // "woof" (inherited from dog)
littleDog.weight;  // 10
var cat = extend(dog, {
weight : 12,
speak : function() {
return "meow";
},
breed : "siamese"
});
cat.mammal;  // true (inherited)
cat.speak(); // "meow"
cat.breed;   // "siamese"
So there you have it, an easy and useful construct for better differential inheritance. I'm sure I'm not the first person to use a function like this, and I bet you can find oodles of helper libraries that have something similar, but I think the ease with which you can make such an extension to JS's OOP model shows how awesome and flexible it is.
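(And once the ES6 `__proto__` standardization mentioned above lands, the same differential inheritance can be written directly in the literal - a sketch:)
// Equivalent to extend(dog, {weight : 10}) using an ES6 object literal:
var littleDog = { __proto__ : dog, weight : 10 };
littleDog.speak(); // "woof"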
Note: Don't try this at work. This is bad code that shouldn't be used in production.
As every JS hacker knows, `eval` is evil. It's slow, insecure, and generally unnecessary. The same goes for the `Function` constructor, which can, but shouldn't, be used to create functions using strings to specify the arguments and body. But the badness of `eval` and `Function` doesn't mean you can't have some fun with them.
`Function.prototype.toString` is a function that returns the source code of the function it's called from. For example,
var add = function(x, y) {
return x + y;
};
add.toString(); // "function (x, y) { return x + y; }" (roughly - engines differ on whitespace)
This is a feature that is mostly used in debugging, but you'll note that since we can get the function as a string, we can modify it and pass it to the Function constructor. This allows us to implement a feature JS has been sorely lacking, C-style macros!
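Here's the basic round trip in miniature (a toy sketch, before we generalize it below):
var double = function(x) { return x * 2; };
var src = double.toString();
// Pull out the body, tweak it, and rebuild the function:
var body = src.slice(src.indexOf('{') + 1, src.lastIndexOf('}'));
var quadruple = Function('x', body.replace('2', '4'));
quadruple(3); // 12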
Simple Replacements
So the first thing to implement is simple replacements, like #defines without arguments. Let's use an object to represent these definitions:
var defines = {
PI : '3.14159',
E : '2.71828',
GREETING : '"Hello, "'
};
And now we can implement the first version of the JS preprocessor (let's call it the JSPP). It takes a definition object and a function and returns another function with the macro expansion applied:
function getBody(fn) {
    var fnStr = fn.toString();
    return fnStr.slice(fnStr.indexOf('{') + 1, fnStr.lastIndexOf('}'));
}
function getArgs(fn) {
    var fnStr = fn.toString();
    return fnStr.slice(fnStr.indexOf('(') + 1, fnStr.indexOf(')')).split(',').map(function(x) { return x.trim(); });
}
function JSPP(defines, fn) {
    var args = getArgs(fn);
    var body = Object.keys(defines)
        .reduce(function (text, key) {
            return text.replace(RegExp('(\\W+)' + key + '(\\W+)', 'g'), '$1' + defines[key] + '$2');
        }, getBody(fn));
    return Function.apply(null, args.concat(body));
}
Example usage:
var defines = {
PI : '3.14159',
E : '2.71828',
GREETING : '"Hello, "'
};
var doStuff = JSPP(defines, function(val) {
return typeof val === 'number' ? PI + E * val : GREETING + val;
});
doStuff(2)         // ≈ 8.57815 (3.14159 + 2.71828 * 2)
doStuff('Bob')     // "Hello, Bob"
doStuff.toString() // shows the macro-expanded source
Function-like Macros
Okay, let's go one level deeper: macro arguments. We'll use the same syntax as the `Function` constructor, argument strings followed by the body, except in an array:
var defines = {
ABS : ['x', '((x)<0?-(x):(x))']
};
And the new (buggy) JSPP implementation:
function getBody(fn) {
    var fnStr = fn.toString();
    return fnStr.slice(fnStr.indexOf('{') + 1, fnStr.lastIndexOf('}'));
}
function getArgs(fn) {
    var fnStr = fn.toString();
    return fnStr.slice(fnStr.indexOf('(') + 1, fnStr.indexOf(')')).split(',').map(function(x) { return x.trim(); });
}
function JSPP(defines, fn) {
var args = getArgs(fn);
var body = Object.keys(defines)
.reduce(function (text, key) {
if (typeof defines[key] === 'string') {
return text.replace(RegExp('(\\W+)' + key + '(\\W+)', 'g'), '$1' + defines[key] + '$2');
} else {
var macroBody = defines[key][defines[key].length - 1];
var macroArgs = defines[key].slice(0, -1);
var replacement = macroArgs.reduce(function(text, arg, index) {
return text.replace(RegExp(arg, 'g'), '$' + (index + 1));
}, macroBody);
return text.replace(RegExp(key +
'\\s*\\(\\s*(.+?)' +
Array(macroArgs.length).join('\\s*,\\s*(.+?)') +
'\\s*\\)',
'g'), replacement);
}
}, getBody(fn));
return Function.apply(null, args.concat(body));
}
Example usage:
var defines = {
PI : '3.14159',
E : '2.71828',
ABS : ['x', '((x)<0?-(x):(x))'],
RESISTORS_PARALLEL : ['a', 'b', '((a)*(b)/((a)+(b)))']
};
var doStuff = JSPP(defines, function(val) {
return RESISTORS_PARALLEL(PI, E) * ABS(val);
});
doStuff(2)         // ≈ 2.915 (RESISTORS_PARALLEL(PI, E) * ABS(2), with the constants above)
doStuff(-2)        // ≈ 2.915 (same - ABS strips the sign)
doStuff.toString() // shows the macro-expanded source
So there you go, everyone's favourite preprocessor partially ported to JavaScript! And while this is not a good use of `Function` and `Function.prototype.toString`, it does show their power. They both have legitimate purposes, and if you're not afraid to go down that road, you can do some pretty wacky stuff with them.
Good afternoon, Internet!
So I've figured I need a place to store neat things I find in JS and other programming tricks. Hopefully someone else can benefit from them.
So, without further ado,
var petStore = {};
petStore.birds = {
'American Bushtit' : '$98.99',
'Antipodean Albatross' : '$12.49',
'Auckland Merganser' : '$11.00',
'Barn Owl' : '$4.97',
'Chestnut-crested Yuhina' : '$25.00',
'Chiriqui Yellowthroat' : '$9.99',
'Common Redpoll' : '$5.50',
'Common Yellowthroat' : '$8.99',
'Crested Drongo' : '$12.50',
'Crested Quetzal' : '$44.99',
'Dimorphic Egret' : '$17.99',
'Doherty\'s Bushshrike' : '$10.05',
'Dwarf Bittern' : '$14.98',
};
console.log(
Object
.keys(petStore.birds)
.sort(function(a,b) { return ((-'Bonjour, monde!'.slice.call(petStore.birds[a],1) < -'Morning, all!'.slice.call(petStore.birds[b],1)) << 1) - 1 })
.map(function(s) { return 'What\'s up, bro!'.replace.apply(s,[/[^A-Z]/g,'']) })
.map(function(s) { return ('Howdy, y\'all!'.charCodeAt.bind(s[0])()-0101)*032+'Greetings, Earthlings!'.charCodeAt.bind(s[1])()-0101;})
.map(function(n) { return 'Salutations, Earth!'.constructor.fromCharCode(n+0x20); })
.join('Adios, amigo'.match()[0])
);