Here There Be Dragons

One thing that drew me to becoming a web developer was that it was a world of unknowns. In 1996, I was 15 and the World Wide Web was just becoming a “thing”. I started playing around with Amaya one day, at my dad’s behest, and found what you could do with it quite neat. I discovered a huge world of possibilities, even though it was merely text and a couple of images at the time. I’d play around with web technologies on and off throughout the late ’90s, but never thought I’d spend my life with them.

Aside from web development, my only other job involved a finite set of possibilities (burgers, fries, and occasionally gravy). Being able to do something perfectly over and over again is a wonderful goal, but if I’m not learning something I lose interest quickly. After I discovered how many problems the web community was trying to solve for its users (screen resolution and browser support among them), I was hooked again.

I determined that becoming a Web Developer would be my “thing”, so I went forward and learned and learned. I managed to find a paying job in 2004, and I continue to learn today. Despite working with web tech since the mid-’90s, I’ve found that the biggest ideas and revelations for me have occurred in the past five years.

It’s dangerous to go alone! Take this.

As a Web Developer, I’ve found that the following 10 items have been the most important. Some of these I’ve learned while at the office at zu, but most I’ve discovered outside office hours and have brought back to my co-workers.

1. Building a website for a specific set of browsers and/or screen sizes doesn’t work well

The industry started to learn this once we began building responsive websites. Even so, it’s still commonplace to see online discussions about which browsers or devices to support. Those in favour of dropping support for a certain browser will mention how small that browser’s worldwide percentage is. But translate that percentage into real numbers and it turns out to represent hundreds of millions of users.

We were able to build websites just fine for those browsers in the past, so why should we build to exclude people? If we build with technologies that work well for all users and enhance upon that baseline for evergreen browsers (browsers that update almost monthly), our content can be accessible to almost every human being.

2. There are enough CMSs in the world, we don’t need to build our own

There are hundreds of CMSs in the world, and fewer than a handful are well supported. Drupal and WordPress each have thriving communities pushing the software forward. Some proprietary CMSs are “okay”, but the moment you need help doing something that isn’t “out of the box”, they become expensive.

With popular open source solutions like Drupal and WordPress, finding an answer to whatever odd request a developer may have is quite easy. At zu we built our own CMS years ago, but found that keeping it on par with the open source alternatives was an enormous effort. We were also solving problems entirely by ourselves, an inefficient use of resources. When we work with Drupal or WordPress, we often find that someone else has solved the same problem already.

3. Working as a team enhances expertise and sense of ownership

After zu moved developers into teams dedicated to specific projects, the quality of our projects skyrocketed. Developers only needed to focus on a few projects, and their team became the internal source of knowledge for each one. This made new features and site enhancements easier and faster to build, as team members either knew what needed to be done or knew who to ask for help. Developers are also able to offer clients insight and new ideas they wouldn’t have had without that narrower focus.

4. Working in sprints with smaller tasks helps us to determine the amount of work we can complete

Determining the amount of work we can do within a given period of time has opened up lines of communication tenfold. By planning and splitting work into manageable chunks, a developer can say “this will take about this long” with much higher accuracy than with large tasks. We’ve found this also gives us greater control over our codebase. Completing tasks in smaller chunks also means other roles receive feedback sooner rather than later: Designers know when Developers need something, and Product Owners can report progress to relevant parties much more quickly.

It feels as though we have a much nicer workflow when we have input of small achievable tasks instead of large “just build this huge thing” tasks.

5. Deploying new code via FTP doesn’t work well and is stressful

For the longest time, maybe about 15 years, I updated the code on a website using FTP. During an update, one file could finish uploading before another, leaving a user with an unexpected result (typically a code error, giving the user nothing to look at). And if something didn’t work as expected after uploading, we needed backups in place and a lot of manual work to get things running again. After adopting deployment tools like Capistrano or Fabric, I found that code deployment wasn’t such a big deal: worst case, a deployment could be rolled back to an earlier version. The stress of code deployment was removed.

Combined with source control (noted below), we are also able to easily deploy a version of the website and know exactly what was deployed and when.

6. A request to the server is expensive, so be responsible with what is requested

A web browser can only download so many web assets at a time. This is due to HTTP/1.1’s limitations and will not be as much of an issue with HTTP/2. With too many requests, it can take a long time to display a webpage, even if the total filesize of that webpage is quite small.
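
That limit can be modelled as a small worker pool: with roughly six HTTP/1.1 connections per host, every request beyond six waits in line. The sketch below illustrates the queueing effect only; it is not real browser code, and `fetchOne` is a stand-in for any request function.

```javascript
// Model of HTTP/1.1 request queueing: only `maxConnections` requests run
// at once; the rest wait. `fetchOne` is a hypothetical request function.
async function fetchAll(urls, fetchOne, maxConnections = 6) {
  const results = new Array(urls.length);
  let next = 0;
  // Each "connection" pulls the next URL off the shared queue.
  async function connection() {
    while (next < urls.length) {
      const i = next++; // safe: no await between read and increment
      results[i] = await fetchOne(urls[i]);
    }
  }
  const pool = [];
  for (let c = 0; c < Math.min(maxConnections, urls.length); c++) {
    pool.push(connection());
  }
  await Promise.all(pool);
  return results;
}
```

In this model, 60 requests through six lanes take ten sequential rounds however small the files are, which is why cutting the request count often helps more than shaving bytes.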

Understanding how a browser loads assets, and the restrictions of the underlying technology, is important for any web developer; knowing those restrictions shows you how best to work within them.

7. Web technologies are fragile and, under certain circumstances, something will fail; build to be resilient to those failures

Because of how a web browser requests assets, there’s a small chance that any given asset never reaches the browser. For a single request it’s a rare occurrence, but when you’re on the web as often as web developers are, you see it happen fairly regularly.

A website needs to remain usable even when something doesn’t work, especially once we factor in what users globally have:

  • Many different web browsers
  • Many different devices, all with different hardware
  • Varying bandwidth; even users with great wifi do not experience that speed all the time

Being able to test on all those devices at varying bandwidth speeds would be an impossible effort and expense. Building our sites so that they can load in a predictable way, no matter the device and no matter the bandwidth, can be a bit tricky.

While building our sites we keep asking “if this fails, what happens?” and follow a progressive enhancement methodology to build resilience against failures that may come from any number of those devices.
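
That “if this fails, what happens?” question can be encoded directly: attempt the enhanced path, and treat a failure (a blocked script, a dropped response) as a normal case with a working baseline behind it. This is a generic sketch of the idea; the function names are our own, not from any library.

```javascript
// Progressive-enhancement guard: try the enhanced behaviour, but fall
// back to the baseline if it throws or rejects. Names are illustrative.
async function withFallback(enhance, fallback) {
  try {
    return await enhance();
  } catch (err) {
    // A failed asset or API is routine on the web; degrade, don't break.
    return fallback(err);
  }
}
```

The baseline (server-rendered HTML, a plain link, a full page reload) keeps working whether or not the enhancement ever loads.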

8. Source Control saves your work for you, and Code Review stops you from submitting bad or broken code

For the longest time I kept extra backup files that I created manually. I would back up entire websites when I only needed to fix a minor bug in one or two files. After learning SVN and later Git, I no longer had to worry about manual backups. And with Git’s workflow, my teammates now know what I’ve done and why.

Teammates can review code submissions before they are added to the main codebase. One project I worked on long ago had no source control, and an issue with the server, combined with our text editor, ended up wiping the contents of several files I had open. Since we didn’t have backups, all that work was lost forever. With source control, none of it would have been lost.

9. Web technology will continue to advance, but not all users will

Every couple of months, at least two popular web browsers are updated. Those updates bring new technologies that web developers can take advantage of. However, not all users are able, or willing, to update their browser as quickly, or at all. When we use these technologies, we need to know what happens for a user who cannot. We gained this knowledge by learning to build resilient websites and by looking at worldwide browser adoption as a number of people, not a percentage of the population. Seeing 319 million people¹ as opposed to 1% of browsers helps us understand the weight of the decisions we make.

¹ 319 million people is 1% of the internet population, at the time of this writing, estimated here.
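
In practice, treating adoption this way leads to feature detection: asking the environment whether a capability exists before relying on it, with a fallback for the millions who don’t have it. A minimal sketch follows; the helper and the objects it checks are illustrative, not a real API.

```javascript
// Feature detection sketch: check for a capability before using it.
// `env` stands in for `window`/`navigator`; names are hypothetical.
function supports(env, name) {
  return env != null && name in Object(env);
}

// Prefer the newer capability, but always have an answer without it.
function pickStorage(env) {
  return supports(env, "localStorage") ? "localStorage" : "cookie fallback";
}
```

Either branch leaves the user with a working site; the newer browser simply gets the nicer one.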

10. Any rules we set for ourselves will likely change, so use tools to determine the best route

Any and all rules we as developers have learned will likely change. Years ago it was best to design our sites for 800×600 or 1024×768 screen resolutions. Since then we have learned that building for any possible resolution serves users far better.

For years in web performance, it was considered best practice to concatenate our stylesheet files into one, because fewer requests meant fewer expensive TCP connections. With HTTP/2, however, concatenation can be counter-productive: requests are multiplexed over a single connection, and smaller files can be cached and invalidated individually. To prove our rules are still useful, we must test them. Another rule Web Developers have placed upon themselves is to only put JavaScript at the bottom of the page, yet there are situations where having JavaScript within the main content is beneficial and gives the user a better experience.

All rules can and likely will change. Anything I noted up above will likely be different in another five years. It’s up to us to change along with them.

 

Author: Devon Rathie-Wright