JavaScript You Need to Know For a Job

Published: June 5, 2015

How much JavaScript do you need for an entry-level job?

Jeff Cogswell recently posted some guidelines.

The absolute basics

  • Variables
  • Functions
  • The difference between null and undefined
  • And so on
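That last point trips up many beginners, so a quick illustration (the variable names are just for demonstration):

```javascript
// undefined: the value of a variable that was declared but never assigned,
// or of a property that does not exist.
// null: an intentional "no value", assigned explicitly by the programmer.
var declared;
var empty = null;

typeof declared; // 'undefined'
typeof empty;    // 'object' (a long-standing quirk of the language)

declared == empty;  // true  (loose equality treats them as interchangeable)
declared === empty; // false (strict equality tells them apart)
```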

Beginner’s List

  • Know the different ways to create objects, such as calling a constructor function with the “new” keyword, as well as just declaring an object literal (such as ‘x = {a:1, b:2};’).
  • Know what a prototype is, what the “this” variable does, and how to use both.
  • Know the difference between an array and an object (and how an array is technically also an object, and can be used as one).
  • Know that functions are objects that can be passed as parameters into other functions and returned from other functions.
  • Know what closures are and how to use them. This might seem like an advanced topic, but when working with functions returning functions, it’s easy to introduce bugs if you’re not careful.
  • Know how to use methods such as the array’s map and filter. With this in mind, I encourage you to read the ECMAScript specification and learn the methods available on all types of objects.
  • Understand the built-in objects (they’re constructors!) and how to use them, including Function and Array (with capital F and A).
  • Know your way around the developer command line and debugger. All the major browsers provide these now.
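Several of the points above can be sketched in a few lines; the Point constructor and makeCounter closure below are invented for illustration, not taken from the article:

```javascript
// Two ways to create an object: a constructor with "new", and a literal.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

// Methods shared via the prototype; "this" is the instance at call time.
Point.prototype.lengthSquared = function () {
  return this.x * this.x + this.y * this.y;
};

var p = new Point(3, 4);      // p.lengthSquared() is 25
var literal = { a: 1, b: 2 };

// A closure: the returned function remembers "count" between calls.
function makeCounter() {
  var count = 0;
  return function () {
    count += 1;
    return count;
  };
}
var next = makeCounter();
next(); // 1
next(); // 2

// Arrays are objects too, with methods like filter and map.
var doubledEvens = [1, 2, 3, 4]
  .filter(function (n) { return n % 2 === 0; })
  .map(function (n) { return n * 2; });
// doubledEvens is [4, 8]
```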

Document Object Model

The DOM (Document Object Model) is the browser’s representation of a Web page. Vital aspects include:

  • Accessing the DOM directly from JavaScript. For example, know how to locate elements through calls such as getElementById, getElementsByClassName, getElementsByTagName, and so on. Also know how to use the newer selector methods: querySelector, querySelectorAll.
  • Accessing the DOM using jQuery. Again, jQuery isn’t part of JavaScript, but a lot of employers expect you to know it. Know the difference between $(‘a’) and $(‘.a’): the first selects every anchor element by tag name, while the second selects every element with class “a”. A simple dot changes everything.
  • Understand the global object, how the browser provides the global object, and how you access it through your JavaScript programming. (Answer: The browser provides the window object (lowercase w) as the global object.)
  • Understand why the browser is the environment that implements the global object, and what happens when you move JavaScript code outside of the browser, such as to Node.js.
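The last two bullets can be demonstrated with a small sketch that runs in either environment (the sharedValue property is purely illustrative):

```javascript
// Locating the global object: in a browser it is "window"; in Node.js it is
// "global" (modern engines also expose "globalThis" in both).
var globalObject =
  (typeof window !== 'undefined') ? window :
  (typeof global !== 'undefined') ? global :
  this;

// In a browser, a top-level "var" becomes a property of window;
// in Node.js, top-level vars stay local to the module instead,
// so you must assign to the global object explicitly to share state.
globalObject.sharedValue = 42;
```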

A lot of documentation presents the DOM API using what looks like C-language interfaces. That’s because under the hood, browsers typically implement DOM objects in C or C++ and expose them to JavaScript through bindings. For example, when you call getElementById, you get back an element; the object your script works with is a JavaScript view of that native object, with its properties and methods.

Beyond the basics

  • Know how to call bind, call, and apply on a function, what the differences are, and why you would need to use them.
  • Know the different ways to create objects, including Object.create, and when you’ll need the hasOwnProperty method.
  • Know the different ways of implementing object-oriented programming, especially inheritance.
  • Know what promises are, and learn two important asynchronous libraries: async and Q. They’re used a great deal in server-side Node.js programming, but can also be a huge benefit in browser programming.
  • Learn server-side Node.js programming. It will really force you to become a JavaScript guru.
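A sketch of the first two bullets; the greet function and the base/derived objects are invented for illustration:

```javascript
function greet(greeting) {
  return greeting + ', ' + this.name;
}
var alice = { name: 'Alice' };

// call and apply invoke the function immediately, differing only in how
// arguments are passed; bind returns a new function with "this" fixed.
greet.call(alice, 'Hello');          // 'Hello, Alice'
greet.apply(alice, ['Hi']);          // 'Hi, Alice'
var greetAlice = greet.bind(alice);
greetAlice('Hey');                   // 'Hey, Alice'

// Object.create builds an object with a chosen prototype;
// hasOwnProperty distinguishes own properties from inherited ones.
var base = { kind: 'base' };
var derived = Object.create(base);
derived.extra = true;

derived.hasOwnProperty('extra'); // true
derived.hasOwnProperty('kind');  // false (inherited via the prototype chain)
```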

Source: JavaScript You Need to Know For a Job

Via Slashdot: How Much JavaScript Do You Need To Know For an Entry-Level Job?

This comment from Slashdot has good advice for the aspiring programmer:

If you want an entry-level programming job and don’t have any experience, you’d better have made something non-trivial on your own time that you can show in an interview and explain the code. If I’m skimming your code and I see you picked a certain data structure or implemented an algorithm when there is more than one way to do it, you should be able to explain your reasoning for coding it the way you did. Also make sure you learn at least the basics of one of the popular frameworks and use it in your demo.

So make a JavaScript web app, or something on the server side with a free or low-cost hosting account. Make it functional, make it as bug-proof as you can, make the code clean and easy to read, and be prepared to show it to a skeptical audience. Think of your interview as an audition and your code as the music you’re going to play.

If you can’t make something to show, you don’t know enough JavaScript yet.


Browser Detection, Feature Detection

Published: February 5, 2013

Browser detection and feature detection: a brief survey of libraries and techniques.


“Modernizr is a JavaScript library that detects HTML5 and CSS3 features in the user’s browser.”

Taking Advantage of HTML5 and CSS3 with Modernizr


has.js – “Pure feature detection library, a la carte style.”

Feature Detection with has.js


Detector is a simple, PHP- and JavaScript-based browser- and feature-detection library that can adapt to new devices & browsers on its own without the need to pull from a central database of browser information.
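All of these libraries share the same core idea: test for the capability itself rather than sniffing the user-agent string. A minimal hand-rolled sketch (the has function and the feature names are illustrative, not any library’s actual API):

```javascript
// Feature detection: ask whether a capability exists before using it.
function has(feature) {
  switch (feature) {
    case 'json':
      return typeof JSON !== 'undefined' && typeof JSON.parse === 'function';
    case 'promises':
      return typeof Promise === 'function';
    case 'query-selector':
      return typeof document !== 'undefined' &&
             typeof document.querySelector === 'function';
    default:
      return false;
  }
}

// Use the feature only if the current environment supports it.
if (has('json')) {
  var config = JSON.parse('{"debug": true}');
}
```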

Stack Overflow pages

Plenty of lively discussion — as usual — at Stack Overflow:

JavaScript: Detecting browser library function – does one exist?

JavaScript libraries to detect browser capabilities/plug-ins

Library to detect and parse browser information

Browser detection versus feature detection

How do I redirect my client to a different page according to their browser?

Need a good JS browser detection library but not JQuery

Further reading

Browser detection in JavaScript libraries

Detecting HTML5 Features

Browser and Feature Detection

Feature Detection: State of the Art Browser Scripting

Server-Side Device Detection: History, Benefits And How-To

JavaScript Template Engines

Published: October 29, 2012

Template engines for rendering data into HTML in the browser using JavaScript:

Ultimate jQuery List


“The ultimate list of tutorials and plugins for jQuery.”
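In miniature, a template engine does something like the following; real engines add HTML escaping, loops, conditionals, and partials (the render function below is a toy, not any particular engine’s API):

```javascript
// Replace {{name}} placeholders with values from a data object.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return key in data ? String(data[key]) : '';
  });
}

var html = render('<li>{{title}} by {{author}}</li>',
                  { title: 'JS Basics', author: 'Jeff' });
// html is '<li>JS Basics by Jeff</li>'
```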

Team and Scale

Published: August 28, 2011

I posed a question at Slashdot about software development methodologies, and got a very interesting, extensive reply.

My question sprang from my own experience with software development, involving “lots of conversations between developers and actual users, with notecards and pens”:

Nothing beats discussion for the kind of small-scale projects I’ve worked on. “It takes a whole village to write an application.”

However, I have to wonder how poorly it scales … I wouldn’t trust space shuttle development if it lacked extreme process control.

When does “takes a whole village” (development team) become “takes a city planner with hundreds of subordinates”?

I received this extensive, thoughtful reply from a Slashdot user I have since identified as Mike Charlton:

Scalability and process control are two different subjects, but I’ll try to answer what I think your question is.

I’m mostly familiar with XP. There are a lot of other so-called “Agile” methodologies, but XP has the most well-defined set of development practices. Other methodologies don’t specify what you should be doing, so a lot is left up to the individuals (which can be good or bad).

But just because it is “Agile” doesn’t mean it isn’t well controlled. For example, doing full XP you should be writing acceptance tests (both manual and automated) for all functionality (though I have rarely seen anyone actually do this). The story cards along with the acceptance tests form the requirements. The “Customer” (I call it the customer proxy since they are rarely the actual customer) should be reviewing the application with respect to the acceptance tests at the end of every iteration. Stories are marked off as they are completed. Regardless of what you use to write the story on (I often used a wiki), handing the card in, or setting the status to DONE, means it is finished. The iteration plan (or backlog in Scrum) gives your commitment for the iteration, and so on. From an ISO/CMMI perspective I can’t think of a single thing that is missing.

Of course many people view “Agile” as a synonym for “Ad Hoc” and proceed accordingly. This is unfortunate.

One of the knocks on XP is that it isn’t scalable. This is both true and false … if you are asking about scalability I might be tempted to say that you have too many programmers and would be better off with fewer.

The problem isn’t in the amount of code you can generate. The problem is that it is very, very difficult to specify a lot of requirements at once and have it not turn into a [mess].

Your bottleneck is not in your programmers, it is in your customer proxy. Traditionally we ignore true specification (or user centered design) and let the programmers make decisions. If the decisions are questioned at all it is by the QA people (which results in a lot of pointless back and forth arguing).

Nobody has the big picture because the millions of tiny decisions made every day are impossible for a single person to absorb. The cure for this is time.

Let the program take longer in development in order to understand what it is you truly need. Have someone play with the system and understand the implications of the changes before making the next step.

So, scalability is not really more of a problem for XP than it is for other methods. It’s just that the problem is more obvious.

If you really need more programmers working, then an issue tracking system is a decent replacement for index cards (although most issue tracking systems have difficulty assigning ordinal priority). Your requirements will still [prove unsatisfactory], though, just like other methods.

If you can partition your problem and find a group of application designers (i.e., subject matter experts who can tell you what your application should be doing) that can communicate with each other well, then you can easily scale XP. I think you can probably keep one of those designers (customer proxy) busy with 8 programmers (or 6 good programmers).

Depending on the problem I suppose that would allow you to scale to 50 programmers or so. More than that will probably raise the risk unacceptably no matter what method you choose. But 50 programmers should be able to maintain a rate of about 30K lines of production code per week (in fully TDD code). I have to say that I can’t imagine wanting to go faster than that … Finding 50 programmers who can do XP well might be rather challenging, though.

wrook (134116) @ Slashdot

I was so taken by the depth of thought, and the implicit personal concern, that I tracked down the author, sent a personal email, and received in reply an email that amounts to another generous essay, which I present here in full by permission:

Hi Karl. Sorry for the late response. I’ve been travelling over the holiday season. Thanks for reading my post. I often wonder if anyone gets any value out of the long rants I make. I’m “retired” from the software biz, so I’m not doing anything except my own stuff right now. But at one point I had been coaching a team of 40 non-colocated people on a project. It was rather interesting. We had about 15 programmers split into 3 teams; we also had a team of about 10 QA people, some documentation people, and the rest UI/customer proxy people.

It wasn’t exactly optimal, but it’s what I had to work with. People were used to their old way of working, so I had to keep their roles similar to what they had done before. The biggest change I made was to the QA team.

I had the customer proxies (ex program managers and marketing people) write stories. They were reasonably good at this since the initial stories didn’t have to be complicated (which would have been beyond them). We took them to a planning game every 2 weeks (better would be every week, probably). QA came to the planning game too. Each story was explained, questions asked, and the devs did estimates. QA also did estimates on how long it would take to write a manual test script (i.e., manual acceptance tests) for each story. Any story that needed to be split up again (often happened) was punted for the next meeting (which is why one-week intervals are better).

My ex program managers were horrible at prioritizing work. They couldn’t put it in ordinal order (only high/medium/low priority, with everything at high). So I would prioritize the stories myself and then tell them where the cutoff line was for the iteration (2 weeks) and let them freak out. After that they could understand what was the most important stuff. Don’t sweat it too much. If they see the iteration plan and are OK with it, then everything will be fine. Just try to have 1 week or 2 week iterations so that they can change their mind frequently. Never add things to iterations mid-stream — they only have a few days to wait until the next iteration anyway!

Real requirements were written by QA and UI people. These were done *concurrently* with the code being written. The requirements were written in terms of manual acceptance tests. These are easy to write, so they are always finished before the code. But often the programmer would have to go to the QA/UI person and communicate what they were hoping to do or ask what they should do. This worked very easily and I didn’t have to do anything special to solve problems.

The programmers were not allowed to move on to the next problem until they had *personally* run the acceptance tests from UI and QA. QA didn’t run them at this point. If QA hadn’t finished the tests, then the programmers were expected to help with them (it never happened).

At the end of the iteration when everything was integrated, QA ran all the tests once more. If anything failed, then the story was either fixed or removed for the next iteration (usually the latter). It very rarely happened.

My intent was that the customer proxies would take the load at the end of the iteration and play with it looking for problems, etc. This was very naive. As ex program managers, they felt that they shouldn’t have to use the app at all. And on the other hand, the QA manager was worried that QA was only testing functionality that we were pretty sure worked, but not doing overall system testing. This was my biggest mistake.

I recommend having an entirely separate team of QA people whose job is to take each iteration build and find out what [needs work]. This is what they are used to doing anyway. Don’t call these bugs!!! They are just ordinary stories. If you call them bugs, then people get all weird about assigning them higher priorities than new work. In reality, we can live with some bugs, we are only trying to maximize useful functionality. UI specialists should also review each iteration looking for work flow improvement opportunities.

Documentation also worked concurrently with other development. They have to talk to the UI and QA people to get the details of the stories. As the stories change during the iteration, they are expected to change their documentation. When QA is doing the integration testing, they are checking their documentation against the actual implementation and making changes if necessary. Getting them to do this was very difficult. They had real difficulty imagining what the feature would look like without seeing it for real. In the end I encouraged them to make outlines of the documentation and then gave them access to the nightly builds so that they could see the stories as they were being checked in. With practice they could get everything done on time, but you need to give them a lot of support at the beginning. Try not to give them too much pressure.

They also bristled at having to rewrite documentation when things changed from iteration to iteration. But by showing them how the programmers did the same thing, they realized this was just an opportunity to improve their writing. In the end, I highly recommend doing documentation this way as it gets the doc people integrated into the process and makes them feel a part of what’s going on.

Finally, one of the best things about doing things this way was that our release planning was trivial. Every iteration build was a release candidate. Everything was done including the documentation. We could demo every iteration build (every two weeks in our case — but try to get it down to 1 week).

We could send “betas” to our customers as well for feedback. When the customers were at the point of “We want to buy this”, we could ship it in a matter of weeks.

Anyway, I’ve ranted enough given that you didn’t ask for this. But if you have any questions, please give me a shout.

— MikeC [1/4/2011 11:13 PM]

Thanks again, Mike. I find your experiences full of insight, and I’m pleased to share them on the web.

Adapt.js: JavaScript Alternative to CSS Media Queries

Published: April 26, 2011

Adapt.js is a JavaScript library (or framework) that helps you design web sites for mobile devices:

For many developers that means using @media queries to selectively target the device screen size and orientation through CSS.

While the @media approach is a good one, it won’t work for every site. That’s why Nathan Smith, creator of the 960 Grid System, has released Adapt.js, a lightweight JavaScript library (894 bytes minified) that allows you to specify a list of stylesheets and the screen sizes for which they should be loaded. Essentially Adapt.js does the work of @media, but will work in any browser, even those that don’t understand @media.

… While using JavaScript to load CSS might seem a bit strange, even if you use @media queries you’re still going to need some kind of polyfill (usually JavaScript-based) to handle those browsers that don’t know what to do with @media rules.

Scott Gilbertson @ Webmonkey
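The core decision Adapt.js automates can be sketched as a pure function plus a bit of DOM glue; the breakpoints and file names below are illustrative, not Adapt.js’s actual configuration format:

```javascript
// Pick a stylesheet based on viewport width (breakpoints are examples only).
function pickStylesheet(width) {
  if (width < 760) return 'mobile.css';
  if (width < 980) return 'tablet.css';
  return 'desktop.css';
}

// In a browser, apply the choice by injecting a <link> element.
if (typeof window !== 'undefined') {
  var link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = pickStylesheet(window.innerWidth);
  document.head.appendChild(link);
}
```

A real implementation would also re-run the check on resize and orientation-change events, which is exactly the bookkeeping the library handles for you.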

‘Scrapers’ Dig Deep For Data On Web

Published: April 19, 2011

Web scraping continues to be a profitable and controversial practice:

The practice of Web ‘scraping’ is growing as many firms offer to collect personal, and potentially incriminating, data about users from their social networking profiles and discussions. Many companies even collect online conversations and personal details from social networks, job sites and forums where people might discuss their lives and even potentially sensitive data, such as health issues. These scrapers operate in a legal grey area, leaving many users exposed.


Slashdot notes: “We ban scrapers like this regularly here simply for not adhering to the rules spelled out in robots.txt.”

For more about robots.txt, see [categorySeeAlso slug=”robots-exclusion-standard”].
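For reference, a minimal robots.txt looks like the following. Note that it is purely advisory, which is why ill-behaved scrapers can simply ignore it (the path is illustrative):

```
# Ask all crawlers to stay out of a private area.
User-agent: *
Disallow: /private/
```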

20 online topography tools for web design

Published: April 8, 2011

20 online topography tools for web design by Ryan Boudreaux

The author uses the word topography* in the sense of surface features:

The overall appearance and composition of the page elements, including type, graphics, images, and textures, and the total look and feel of a web design. While most of these tools are typographical in nature, the list also includes tools that help with conversions, tuning, CSS, banners, images, and colors.

See 20 more online tools for web design.

(*[categorySeeAlso slug=”look-and-feel”] is a more commonly used term.)

Deliver Email using PHP

Published: March 22, 2011

“This article describes the different ways to send e-mail in PHP and comments which one would be fastest depending on your circumstances.”
— Manuel Lemos

1. PHP mail() function
2. SMTP server relay
3. Sending urgent messages by doing direct delivery to the destination SMTP server
4. Sendmail program
5. Qmail, Postfix, Exim, etc.
6. Microsoft Exchange pickup folder
7. Putting all recipients in Bcc headers
8. External Web services
9. Caching message bodies

jQuery Mobile Alpha 3 Released


The jQuery team reports:

We’re pleased to announce the third alpha release of the jQuery Mobile project. This release includes a number of bug fixes and enhancements to the original jQuery Mobile Alpha 1 and jQuery Mobile Alpha 2 releases.

jQuery Mobile Alpha 3 Released

Via Ajaxian