Why side projects fail

So, my side projects fail! Let me first describe what I would consider a successful side project.

A side project is something:

  • That is fun to do
  • That you learn from
  • That reaches an audience beyond its creator and their friends and family

Reaching an audience beyond friends and family makes side projects something more than a hobby. Take for instance the photo calendars I make for my relatives every year. I would not consider those side projects, even if it’s fun making them and I learn from it.

Added to that, if a side project earns you decent money, to me that is just work. An early stage startup can be a side project, but as soon as it takes most of your working week, I’d say it’s more than that.

My side projects have always been fun and good learning experiences. At least when I started them. This made me hesitant to call them failures; I don’t regret having spent time on them. But to call them successful, they should have reached a significant audience. They never did. I used to think that it was just me being lazy and bad at marketing, but I realized that there is more to it.

Failure 0 – Not promoting the project

Let’s not ignore that despite all my other mistakes, I still sucked at promoting my projects. My prioritization application, which never got more than 2000 users, probably failed for exactly that reason. According to users, I designed and built Prioritizer well enough to be useful. I created a website for it with a blog for SEO. I also put the application on download.com. When I found that only a few people were downloading it, I gave up on improving it. A good product sells itself, I told myself. That may seem true when Amazon or Google launch something without telling anyone. Except, you know, they have whole departments that make sure that the right people will notice and that it ‘organically’ goes ‘viral’. Private side projects compete for the same kind of attention as the new products of multinationals. So yes, promotion is essential for a side project.

Failure 1 – Not publishing intermediate results

A good friend and I spent many nights over many months, years perhaps, making a comic. A lot of brainstorming, storyboarding, defining our style, sketching, inking. And procrastinating, talking about what Web 2.0 would bring—it was that long ago. One of my favorite side projects! We never finished coloring before we ran out of time and motivation though.

We could have shown the sketches to our friends. We could have published the inked version. But no: we wanted it to be full color, printed. If only I had known the concept of Minimum Viable Product back then!

Failure 2 – Being a perfectionist

One evening a couple of design friends and I were discussing how bad some websites were and how little visitors can do about it. We invented the Website Police: a website where people could report bad sites using a quasi-serious form. It would have options like “it’s too damn ugly” and “I can’t find the info I need” and a text field for other issues. Upon submission, we’d send copies of the report to the info@ mail address for the website’s domain. We’d also spam a whole bunch of other predictable email addresses. Our website would show an archive of all the reports.

Building the website turned out to be quite straightforward, albeit a bit time-consuming. Nonetheless it felt like something useful to do in my downtime as a freelancer.

Design-wise it was harder. I believed only a perfect website could do the job of shaming bad websites. Of course it never looked good enough for me to confidently promote it. Looking back, an ironically bad-looking, brutalist visual design with simple interactions would have been enough.

Feel free to steal the Website Police idea by the way. I still think it would be a great way to get attention for good design. As well as a lot of bad leads for your freelance business.

Failure 3 – Partnering with people like yourself

Because my life doesn’t completely follow a listicle outline, this failure is about the same project as the previous one. From the start, I partnered up with another freelance designer. It turned out that our strengths and skills had a lot in common, which felt good and made communication easy. It also meant that we ignored and got stuck on the same issues. I’m not saying that it wasn’t useful to be two of us. It would have been better though, if we had had someone with a different mindset to collaborate with.

Failure 4 – Not writing about it

I connected a CO₂ sensor and an SD card slot to an Arduino. I also wrote a program so I could visualize and track CO₂ levels in my home. Why? CO₂ is not noticeable, but it does affect your mood: it makes you feel tired and think less well. Also, I wanted to play around and build electronic circuits.

I managed to get this to work, but never got around to sharing my experiences. The code is out there on GitHub, but I don’t think anyone ever looks at it. This could have been a helpful example for others but never was.

Failure 5 – Not having a plan

This is about a special project: find out what factors in my life cause me to feel good or bad. A sort of quantified self study on qualitative factors. I read some social psychology papers about measuring happiness. Based on real studies, I created a personal questionnaire with about 25 questions. It included stuff like “How well-rested do you feel?” and “How stressed are you?” I answered the questions several times a day at random times using the Reporter app. Yes, it was weird addressing myself like that.

This turned out to be too ambitious: the dataset got so big that I still haven’t analyzed everything. In fact, I didn’t have a plan for the whole thing at all. I wanted to write about it, but as the data are so personal and I never got to interesting conclusions, I didn’t. Statistical analysis is not something I’ve done since graduating, so I could have seen coming that it wouldn’t be easy. Not all was lost though. You’d expect that the reminders for the questions five times a day would be a major distraction, but I enjoyed doing it. It made me more mindful. I also think it made me more aware of stress factors that I could deal with before they got big.

Failure 6 – No time constraints

While living abroad, I used to have a lot of fun blogging for friends and family. Now that I write about work stuff, I have much higher standards. That causes me to postpone finishing posts, not start new ones, and write less altogether. This of course leads to less practice and boring texts, if I ever post anything at all.

I believe I did my best writing on a guesthouse computer in India with a couple of people behind me waiting for that computer to become available. Done is better than perfect. I should force myself to just finish something within 30 minutes.

Failure 7 – Giving up on a tool/programming language/framework

Twice I gave up on a project because by the time I (nearly) had an MVP, the technology I was using was getting outdated. I’m not an engineer and I don’t need to know all the latest JavaScript frameworks and whatnot. But when I spend time learning, I want to be sure I’m not learning things that are irrelevant in a year or two.

That used to be my attitude at least. I now think that these technologies are tools that all have a lot in common. Continuing to build something using a framework that is being abandoned is not a bad thing in itself. If you’d pick any successful product that was started three years ago and had to build it from scratch, you’d probably choose a different tech stack today. Also, many technical challenges are independent of the programming language. So continuing to use a tool that I know would actually let me learn more generalizable things than having to learn all the concepts of some new tool.

Failure 8 – Ignoring professional experience altogether

A side project is not supposed to be normal work. Still, it is a bit like work: it involves similar activities. So how can a side project still be so much fun that I want to spend my free time on it? I used to think about this all wrong.

Two years ago I came up with the idea of making a big online countdown timer clock for workshops and presentations. I made a JavaScript application that kind of worked. But after the first round of testing, the design turned out to be all wrong. By that time I had invested so much time that I didn’t feel like pulling apart my messy code and making a new version. It seemed like such a lot of work.

The root cause of this is that to make it not feel like real work, I threw all my professional experience out of the window. Some basic mistakes I made were:

  • Testing too late: I did show other designers my sketches, but failed to properly test the design before starting development.
  • No project management: I didn’t even have a backlog! As a result I only worked on what I thought was most urgent, like nice transitions between portrait and landscape orientation.
  • Sloppy version control.
  • No continuous deployment, or even any deployment setup.

That sounds pretty much like working for the worst boss I’ve ever had. It had to be fun, right?

In the end it was tedious work fixing my own mess; I wasn’t having fun and wasn’t learning anything anymore. I never shipped it. A complete failure!

Back to the question: how can a side project be fun? Instead of treating it like not-work, I now see it as work under ideal circumstances, where I can pick the tools, tasks and people however I like, following my own preferred processes. Meticulously. This way I can spend the occasional half hour that I have productively, because my files are neatly organized and repetitive tasks are automated in scripts.

The happy end

A few months ago I picked up the countdown timer project again. This time I’m not just hacking it together. I’m following a proper design and development process: I test it with the target group and deploy my in-between results regularly. I will write about how it’s progressing again in a few months. In the meantime, you can have a look at the big timer and let me know how you would use it.

Choosing colors often depends on the designer’s taste. This can lead to tough discussions with stakeholders who do not share that taste. Everyone has gut reactions toward colors, but expressing why an element should have a certain color rarely goes beyond that. Because I think discussions about personal taste should not be a big part of design critique, I decided to learn more about color matching rules. After all, when I can explain why certain colors are a good choice, others can react to those reasons. While analyzing color palettes, I quickly found out that I needed a better understanding of what color actually is and how we perceive it. In this post I will share what I have learned so far.

What is color?

Color is a person’s perception of an object’s reflected or emitted light. It depends on the light’s wavelengths, its brightness and its environment. Yes: color is in our own individual experience of what we see. But let’s get into the easy stuff first – the physics – and then deal with that personal perception.

In order to have a precise discussion about color, it helps to have a formalized way of defining colors. The model I find most useful describes a color’s brightness, hue and saturation. Brightness is the easiest to understand: it defines how much light an object sends to our eyeballs. The other two are weirder and more interesting.

How is wavelength related to color?

Hue is the quality of a color that we can most easily describe with our common labels like red, yellow and violet. Most pure colors have a spot somewhere on the visible spectrum of light:

The visible spectrum of colors. Or actually: a compressed version of it, because no display can really show them all.

Our eyes do not have some sort of nanometer sensors to measure the exact wavelength of a light source. Instead our eyes are sensitive only to three ranges of wavelengths that peak in the wavelengths for red, green and violet:

The normalized sensitivities of the cones on our retinas for violet, green and red:

Chart with the relative sensitivities of retinal cones

Image by Bruce MacEvoy. I recommend spending a few hours reading his excellent page about color perception. Much of this post is just a simple summary of it.

I’m showing the normalized version of the chart above, meaning that the sensitivities for the three cone types are all set to a maximum of 1.0. In reality, our violet/blue-sensitive cones are much less sensitive. However, we have more cones of that type, which levels out the lack of sensitivity a bit. Moreover, our eyes and brain compensate for the differences in sensitivity. They actually do that compensation in an odd way, for instance making cyan look brighter than violet.

What happens if you mix colored light?

Although our retinal cones’ sensitivities peak at only three wavelengths, they pick up signals beside and between those peaks. Most wavelengths trigger two cone types, and the ratio in which they trigger the two defines where on the spectrum we identify the color.

Imagine we point one beam of green light (with a wavelength of 500 nm) and one of red light (650 nm) at the same spot. That spot does not reflect light with a wavelength of 550 nm; the light itself is not affected by the mixing. We do, however, perceive the color that lies between green and red on the spectrum!

Blending green and red makes yellow

Our perception of light with two wavelengths does not equal a straight average of the two.
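This additive mixing is easy to sketch in code. A minimal illustration in RGB terms (my own sketch, not from any color science library): light beams add per channel, clipped at the display’s maximum.

```javascript
// Additive light mixing, sketched in RGB: channel-wise addition,
// clipped at the display's maximum intensity of 255.
function mixLight(a, b) {
  return a.map((channel, i) => Math.min(255, channel + b[i]));
}

// Red and green light blend into yellow...
console.log(mixLight([255, 0, 0], [0, 255, 0])); // [255, 255, 0] – yellow
// ...while red and blue blend into magenta, not the spectral average.
console.log(mixLight([255, 0, 0], [0, 0, 255])); // [255, 0, 255] – magenta
```

Note this models what the display emits; the perceived in-between hue is our cones’ doing, as described above.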

Can you create all colors with red, green and blue?

So computer displays show images with pixels, and each pixel has three tiny lights: a red one, a green one and a blue one. But can such an RGB array create all colors? In theory, yes – well, almost. As we discussed, the three cone types cannot discern between a mix of two narrow ranges of wavelengths and a spectral color. So if those narrow bands are aligned with the peak sensitivities of our retinal cones, the cones can be stimulated just like they are by the spectral colors in between.

The red and green subpixels create everything from red, orange and yellow to green. The green and blue subpixels are for the, well, green, cyan and blue hues. Red and blue form purples and magentas.

Why don’t we have RGV displays?

Hey, hold on: if our short wavelength cones have their peak sensitivity in the violet area, why don’t we use red, green and violet light to display colors rather than red, green and blue?

As shown in the cone sensitivity diagram, our red-sensitive cones have a small peak in the violet area. We see saturated blues when:

  • the short wavelength (violet/blue) cones are triggered,
  • the mid-range (green) cones also quite a bit, and
  • the red-sensitive cones barely at all.

So you could mix a blue from violet and green light, but it would not look saturated, because our red-sensitive cones would pick up that light too. With red, green and blue we can accurately describe all hues in between.

Moreover: because of the small sensitivity peak of the long wavelength (red) cones in the short wavelength (violet) area, we can mix purples that look just like violet by adding a bit of red to blue. In practice, that is the reason that displays cannot show violet as saturated as we can see it reflected from real objects.

Where are purple and magenta on the spectrum?

Purple and magenta are examples of non-spectral colors; you can only make them by mixing wavelengths. Red and blue, as you know. Now unlike with mixing red and green, we do not perceive a color between those two on the spectrum! By averaging the wavelengths, you would imagine you’d get green:

A blue and a red circle overlapping. The overlap is purple.

Combining wavelengths means you get something in between, but it’s not a mathematical average.

Purple looks like violet instead! The reason is that violet light not only activates our short wavelength cones, but also the long wavelength cones for the reds. Purple also triggers both these types, making our brains interpret them as similar. Magenta is like purple with more red.

Gradient showing red, magenta, purple and violet

Why make a color circle out of a linear spectrum?

Color circles are used a lot in color theory and design software, but why do they exist at all if the spectrum is linear? Because purple and violet are so much alike to us, we can create a smooth gradient between the two extremes of the spectrum: violet and red:

Two spectra that overlap in the middle of the image.

In the center I have blended the extremes of the visible spectrum. You see purple and magenta there.

By drawing the spectrum as a circle so that both ends overlap, we get the same effect:

The spectrum bent around a circle. Its ends overlap at the bottom.

Again, this only makes sense because our long wavelength cones are also sensitive to short wavelengths. For creating colors for animals that have different cone sensitivities, this model would not work. In fact, most of our color images do not look anything like their reality.

The color circle contains all perceivable hues. Another neat feature of the color circle is that you can find contrasting colors: they lie on opposite sides of the circle.
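That opposite-sides rule is trivial to express in code. A tiny sketch, with hues in degrees around the circle (convention and function name my own):

```javascript
// A hue's contrasting (complementary) hue lies directly across
// the color circle, i.e. 180 degrees away.
function complementaryHue(hue) {
  return (hue + 180) % 360;
}

console.log(complementaryHue(60));  // 240: yellow's complement is blue
console.log(complementaryHue(300)); // 120: magenta's complement is green
```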

Describing colors

A hue alone does not make a color: we need to define the saturation and brightness too to describe a color completely.

With values for hue, saturation and brightness we can accurately define colors in a way that is close to how we describe colors verbally. Take for instance ‘vivid light blue’. The basic name of the color, ‘blue’, describes the hue. ‘Light’ means high brightness. ‘Vivid’ means high saturation.

These are the three properties in the Hue/Saturation/Brightness model that is used in a lot of software. It has some issues (I should write about that), but the model is complete, precise and in many ways close to how we perceive color.
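To make the model concrete, here is a sketch of the usual HSB-to-RGB conversion that such software performs under the hood (function name my own):

```javascript
// Convert hue (0–360 degrees), saturation (0–1) and brightness (0–1)
// to an [r, g, b] triple with channels in the range 0–255.
function hsbToRgb(h, s, b) {
  const k = n => (n + h / 60) % 6;
  const f = n => b * (1 - s * Math.max(0, Math.min(k(n), 4 - k(n), 1)));
  return [f(5), f(3), f(1)].map(v => Math.round(v * 255));
}

// A fully saturated, fully bright blue:
console.log(hsbToRgb(240, 1, 1)); // [0, 0, 255]
// Zero saturation at full brightness is white:
console.log(hsbToRgb(0, 0, 1));   // [255, 255, 255]
```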

What is saturation exactly?

Saturation is the purity of a color. The purer a color, the narrower the band of wavelengths it occupies on the spectrum. Spectral colors are the most saturated colors possible and have only a single wavelength. The least saturated colors are white and black and all the greys in between. White light is a balanced mix of all the colors that we can perceive: light waves from across the visible spectrum. It can also be made simply by mixing red, green and blue in a clever way. Either way, the mix has to imitate the ratio in which the sun’s light triggers our color sensitive retinal cones.

Knowing how to make the most saturated and least saturated colors, you can create colors with medium saturation by adding white light to a saturated color.

A triangle with a mesh gradient. 100% Saturated at the top, white at the bottom left, black at the bottom right.

This triangle has all possible variants of this blue hue. Vertically its saturation is varied, horizontally its brightness.
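In RGB terms, that ‘adding white light’ amounts to nudging every channel toward the display’s maximum. A minimal sketch (function name and convention my own):

```javascript
// Desaturate a color by blending in white light.
// amount = 0 leaves the color unchanged; amount = 1 yields pure white.
function addWhite([r, g, b], amount) {
  return [r, g, b].map(c => Math.round(c + (255 - c) * amount));
}

console.log(addWhite([0, 0, 255], 0.5)); // [128, 128, 255]: a medium-saturated blue
```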

Saturated colors only trigger two retinal cone types – their sensitivities overlap, so it’s impossible to trigger only one type. That means we are actually quite bad at assessing color saturation. We cannot see the difference between a spectral yellow and an even blend of several greens and reds, for instance. The only way we can perceive that an object has an unsaturated color (and thus reflects light containing many wavelengths) is when all three of our color receptors for red, green and blue are activated by that color. So as long as one of our three cone types is not activated, we process the color as if it were a pure, spectral one. The more light the third cone type additionally picks up, the less saturated we consider the color.

Conclusion

Just by trying to define color we already have to go beyond the physical properties of light. Our retinas process wavelengths in a non-mathematical way, requiring a lot of interpretation. And I did not even get started with what our brains do with the signals they receive from our eyeballs! That’s for the next post.

Follow me on Twitter or Mastodon to get an update as soon as part 2 is ready.


Warning: these posts are pretty old

It’s been too long since I wrote something here. Some posts were so bad that I archived them, and I’m quite sure nobody today is interested in whether or not designers should code. Nor were they before. I’ll post something new in the next, let’s say, month or so. Promise.

Should designers write code?

Should interaction designers write code? Many in our field seem to be occupied with this question. I decided to find out for myself. In this post I’ll share my experiences of the last three years combining design with code. Three years ago I only knew a bit of ActionScript, which – besides in the odd Flash banner – never made it to production. I’m now proficient with HTML, CSS and jQuery. Last year I worked for half a year at Apple as a software engineer.

How coding changes the design process

Design without coding

The figure below is a rough interpretation of how designers work as soon as they have picked a problem to solve: ideas are sketched, good ideas are turned into detailed designs that are in the end handed to an engineer in the form of a specification. Regardless of the field (industrial, interaction, graphic design), the specification consists of a technical drawing and a formal description of the features.

Design is not a linear process. Solving a problem usually takes several iterations of ideation, sketching and detailing. New ideas pop up and the designer becomes aware of issues that weren’t identified before. At several stages in the process, prototypes of low and high fidelity are made and tested.

Diagram: Idea → Sketch → Detail design (the design phase), followed by Development.

These used to be my basic tools in the design process. Of course in reality, the process is not this linear.

This process isn’t unique to interaction design. You can just as well replace ‘Development’ with ‘Printing’, ‘Injection molding’, etc. Except for artistic and artisanal designers, engineers and machines take the next steps to turn the detail design into a product.

Design with coding

Nowadays I often skip the step of making detail designs and design specifications, because I deliver (part of) the front-end to the developers in my team.

Diagram: Idea → Sketch → Detail design → Front-end coding (the design phase), followed by Development.

In projects for which I do front-end development too, pixel mockups aren’t always necessary.

Things I’ve learned as a coding designer

My code may not be great, but it only has to be good.

A programmer with good taste doesn’t make a designer. Likewise, writing a few lines of JavaScript every now and then doesn’t make me a computer scientist.

Not all coding is programming though. HTML and CSS only define the appearance of more or less static elements of a web page. They’re just design specifications that are understood by computers. I therefore think it’s only natural for web designers to write HTML and CSS themselves. You don’t need to be a programmer to do that really well.

Interfaces are more than static layouts, so it’s necessary for designers to define how the interface changes based on events, like a user clicking a certain button. When designers start writing code to define how such interactions should be handled, they enter the realm of software development. There, code can output unexpected results and just completely fail to execute. This is where developers may get suspicious about the quality of my work.

I may not write the most efficient JavaScript, but it’s understandable for anyone familiar with the language. Stuff that happens in the front-end is usually relatively easy to comprehend, because the results are so visible. Because of that, a short review of my scripts will suffice for a developer to identify potential issues. Except for high profile projects, the efficiency and beauty of code are not likely to be noticed by anyone but those in the development team. It doesn’t matter if the code is well-written or not, as long as it works.

Doesn’t going straight from idea to code compromise design quality?

Quality of code may not have to be an issue when designers start coding, but what about the quality of design? Isn’t it a terribly bad idea to just implement first ideas without considering and testing alternatives, as mentioned above?

Generally, I think it’s stupid to go straight from idea to code. A complete design is way too complex to create just in the mind and then execute without the help of sketches and models. There is a scenario where it makes sense though. When parts of a design have been implemented in a front-end, it’s very easy to make changes. What would happen if I changed the button colors, or if I increased the white space? Such questions are easy to answer by changing a few lines of CSS. Changing a pixel mockup takes more time than that. If designers can’t make such changes themselves, a lot of time is spent on creating mockups and communicating changes to a developer. Developers seem not to be especially keen on getting such mundane change requests.

So, as long as designers don’t forget they’re designers first and coders second, editing code is a great way to get the details right in an interface.

Better team collaboration

When I didn’t write CSS

Designer: Can you please change the line heights? Developer: Maybe after I've finished some crazy complicated computer things you don't understand, but are very important.

Now I do write CSS

Designer: I have just committed the new CSS. Developer: Thanks.

Once I delivered a visual design as an HTML mockup. The front-end developer who received it was all like “WTF is this shit!?” and wanted Photoshop files as a reference instead. I still don’t know why. No Photoshop file would have shown the animations I had included. Even communicating which parts of a responsive layout are fixed size and which parts scale is awkward in a set of images. Maybe this developer was just a bit territorial about his field of expertise. Totally unnecessary though: he was experienced and there’s no way any designer would take over his job on the projects he worked on.

That incident was an exception, luckily. Not everyone may feel comfortable with a shift of responsibilities, but more overlap in understanding of concepts does make collaboration easier. I think engineers have a natural interest in technical problems, whereas designers are more interested in user experience. That always leads to discussions about priorities. Now that I know how the code of a front-end works, these discussions go beyond pushing for our own stakes in the project. We identify issues and come up with solutions together. Peace.

Maybe I care a bit too much for engineering issues now

My better understanding of engineering issues comes with a trade-off. When discussing ideas for changes, I’ve caught myself more than once with the developer’s attitude of “No. Then I wrote all that fine code in vain!” That’s bad. As a designer I should be arguing the benefits for the users instead! For this reason, it’d be good to have more than one designer on the team, with one who’s not writing code at all.

I hope being aware of this pitfall helps, but I’m pretty sure it affects me on a subconscious level too, favoring ideas that are easier to build with my limited dev skills.

Prototyping

I think the drawback of having designers code mentioned above is outweighed by their ability to create realistic prototypes. How much time do designers spend creating advanced prototypes with Axure, Fireworks and the like? Isn’t it a waste, if all that work has to be redone to make it work in the actual product? If coding designers can make prototypes more efficiently because parts can be reused in the implementation of the product, more prototypes can be built and tested early in the process. This should reduce the need for expensive changes later on.

I may be a slow coder, but writing a design spec is a total waste of time if you can avoid it.

Compared to specialized front-end developers, I’m rather slow at writing code. But being capable of doing it, I don’t have to write lengthy documents with specifications anymore. I deliver a front-end instead.

Also, I don’t waste any time creating those so-called pixel-perfect Photoshop files. I just spend less time getting a design from concept to detail design, which used to be a huge chunk of my work.

Anyone who believes a pixel-perfect PSD is essential in the development of a web-based product had better change their career to embroidery. As long as the big browsers are not capable of displaying web pages consistently, I don’t see the point of having a perfect execution of the design that no user will ever see.

Best of all, I make fewer change requests after assessing a design implementation. The old workflow of dumping a design spec in a developer’s inbox is prone to errors. Even simple web apps contain a few thousand lines of CSS, based on a document with dozens of pages of specs. So even if I’d always write complete (no) and flawless (not likely) specs in one run, the developer may overlook parts or just interpret things differently.

Even if my code were only used by developers to see what the design should look like in the front-end, I’d still have used the clearest and most efficient way to create a design spec.

Conclusion: learn to code, do what you like

We need specialists in every aspect of design: usability research, ideation, brand experience, you name it. A big organization with a big project may benefit from teams of experts – if it can afford it. In small to mid-scale projects with only a few designers and developers, coding designers deliver more, collaborate better and create higher quality products than those who stay away from code. To designers of interactive products, I recommend getting familiar with code, at least to the level where they feel comfortable discussing technical issues with engineers.

How I design ‘mobile first’

I never found designing websites for mobiles really attractive, compared to designing for big displays. Pretty much any design gets better when you add:

  1. Lots of white space
  2. Contrast in size

Since you don’t have the luxury of applying that on little smartphone displays, what’s the fun in designing for mobile? After designing and building this website the mobile-first way, I realized it’s all about creating a good experience for visitors on any device. It’s more work than it used to be when 1024 × 768 pixels was the standard, but it sure is rewarding to see a design fluently adapt to the size of its window and work perfectly on any device. Doing it mobile first helps to achieve that.

What is mobile first?

Strangely, even in Luke Wroblewski’s Mobile First book I can’t find a definition of the term ‘mobile first’. To me it simply means starting design and development of web content for small devices first and enhancing it for larger screens later. That comes with some requirements, but also with opportunities that can greatly improve the user experience.

How I do mobile first

Message first

Designing for mobile makes you focus on what’s most important on each page. Forget slideshows, mouse hover effects and banners. Put the core info on top and make sure it’s at least partly visible when the page is loaded. I’m not talking about reintroducing the fold, but about immediately giving website visitors the feeling they get what they’re looking for.

Example

On a mobile device, each of my project pages shows a larger version of the thumbnail from the project overview. I assume that when people click the thumbnail, they want to see the design up close, so I offer that immediately. From a layout perspective, that’s not really nice, so visitors with a large enough display get that image a bit lower on the page.

Depending on your device, you will see the image either before or after the project details.
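As a sketch of how that reordering can be done: with a flexbox layout, a media query can swap the visual order of the image and the details without changing the markup. The class names and breakpoint below are illustrative, not the actual ones from my stylesheet.

```css
/* Hypothetical markup: a .project container holding
   a .project-image and a .project-details element. */
.project {
  display: flex;
  flex-direction: column;
}

/* Mobile base style: show the image first. */
.project-image   { order: 1; }
.project-details { order: 2; }

/* On larger displays, move the image below the details. */
@media (min-width: 768px) {
  .project-image   { order: 2; }
  .project-details { order: 1; }
}
```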

Basic styling first

You don’t get to make complicated layouts on a small screen, so typography and color play a much more important role than on a large display. On any display size, these two have to be right, because bad typography and an ugly color scheme can’t be compensated for by some cool layout. Designing mobile first forces you to get the basic styling right without getting distracted by details and fun features.

Example

One of the first things I did to create a style fitting the requirements for this website was picking fonts and a color scheme. These were applied in a base stylesheet, working best at a small window size: less than 480 pixels wide. I optimized the character size and column width for readability: ten to twelve words per line. I might have failed to get that right if I had started with a larger, more complicated layout, where more factors have to be considered.
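As an illustration, a base stylesheet like that can start with little more than typography, color and a readable measure. The fonts, colors and values below are made up for the example; they are not my actual choices.

```css
/* Base stylesheet: applies to every window size,
   optimized for widths under 480 pixels. */
body {
  font-family: Georgia, serif;  /* illustrative font choice */
  font-size: 100%;
  line-height: 1.5;
  color: #333;                  /* illustrative color scheme */
  background: #fdfdfa;
}

/* Keep the line length around ten to twelve words. */
article {
  max-width: 30em;
  margin: 0 auto;
  padding: 0 1em;
}
```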

Simplicity first

Continuing on the previous example: yes, I like drawing nice, complicated grid layouts with lots of sidebars, widgets and whatnot. But if the message comes across more clearly without that stuff on a mobile, why would you need it on a desktop computer? For this website I started with a single-column layout for mobile devices. I wrote the CSS and tested it in a narrow browser window on my laptop. When things looked ok, I made my browser window wider to the point where things didn’t look ok anymore. Then, for that window width, I added CSS adjustments for character sizes and margins.

Example

Dragging my browser window wider, at a window width of around 600 pixels some things started to look off. I noticed that the buttons of the top navigation menu got huge. For mobile, they’re set to 50% of the window width, which obviously doesn’t make sense for larger displays. I created a breakpoint with a CSS media query at the point where I was sure the labels in the buttons would fit in a single row, and set the width of the buttons to 25% there.
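In CSS, that breakpoint looks roughly like this. The selector is hypothetical; the widths follow the description above.

```css
/* Base style: on small screens, two menu buttons per row. */
.nav a {
  display: block;
  float: left;
  width: 50%;
}

/* Around 600 pixels the labels fit on a single row,
   so show all four buttons side by side. */
@media (min-width: 600px) {
  .nav a {
    width: 25%;
  }
}
```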

Step by step, I repeated this cycle of increasing the window width and adding styling. Some text was put in columns, like the details in project descriptions. I found there was no need to add sidebars and widgets at larger window widths. Had I started by sketching layouts for large displays, I might have come up with ideas (sidebars, mega menus, etc.) that were much more work to implement.

Mobile CSS first

Mobile first makes as much sense for writing code as for design, because layouts for large screens tend to be more complicated than the single-column designs on a mobile. As mentioned, designing for small devices first and scaling up later may lead to the realization that there’s no need for a complicated grid layout at all. That definitely cuts the time spent writing CSS.

That said, creating a fully responsive website takes far more time to write and test than one that’s optimized for a single, fixed screen size.

Example

My CSS, based on the Bones WordPress theme, is organized as a base stylesheet to which styling for larger screens is added progressively with media queries. This is good because:

  1. The CSS for small screens stays light, and only gets heavier for larger screens, which generally belong to computers with more processing power and memory.
  2. The CSS is clean, because the extra styling for big screens is separated from the base stylesheet that is applied to all devices.
  3. It’s intuitive to write. You don’t have to override complex styling for big displays by applying higher CSS specificity to every attribute that needs to change.
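To sketch that organization (the breakpoints and rules below are illustrative, not copied from my actual stylesheet):

```css
/* Base stylesheet: applied on every device. */
body {
  font-size: 100%;
  line-height: 1.5;
}

/* Larger screens only ever get additional rules; nothing
   in the base sheet has to be overridden with extra
   specificity. */
@media (min-width: 600px) {
  body { font-size: 112.5%; }
}

@media (min-width: 1024px) {
  .project-details { columns: 2; }
}
```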

Conclusion

I’m glad I didn’t dismiss mobile first as just another buzzword. The experiences I described above convinced me that designers, developers and product owners alike can benefit from making the shift to mobile first.

New design pattern: Drag To Tag

Working on a new platform for keeping annotations, I came up with a new way of tagging items! In many applications, tagging is used to make items such as photos, bookmarks or songs findable. I guess many people, like myself, find proper tagging tedious at times. It’s an administrative job, something you may skip if you’re not sure you’ll need it in the future.

Bookmark tagging window in Firefox

When you tag a bookmark in Firefox, you edit/add an item and add keywords by typing them into a text field.

On top of that, for many people typing on a mobile device is rather inconvenient. So why don’t we skip the part of adding a new bookmark and go straight to adding tags, without actually having to type anything? ‘Drag To Tag’ is my idea for a new design pattern for assigning tags. Just drag the tag symbol to a word in the text:

New design pattern for tagging words on, say, a web page

Drag the tag to the words that are typical for the webpage to tag them. No typing!

The tags that are visualized in the text could be made resizable to create tags longer than a single word. Of course we shouldn’t get rid of typing altogether: users may want to add tags that don’t occur in the text, after all.

A similar pattern would be selecting a word and clicking/tapping the tag button, which would be especially handy for tags longer than a single word.

Drag To Tag has several benefits over traditional tagging:

  1. No typing required
  2. Fewer user actions required (you can even skip the ‘add bookmark’ action: when a tag is added, a bookmark is created when it doesn’t exist yet for the page)
  3. It’s visual, and thus less nerdy

Though it seems so simple, I haven’t seen Drag To Tag in any application yet. Are there any drawbacks I’m overlooking, from a design perspective?