I’m comparing screen design tools to find out which is best for what. I thought it’d be interesting to collect what I’m missing in them and would like to see in future tools.
1. Design user interfaces, not screens
My concern with the mainstream UI design tools is that they all use static screens as the main unit of interaction design. Even in the one that does interactivity best, Adobe XD, prototyping options are very limited. Screen design tools can be combined with prototyping tools like InVision, Framer or Principle, but separating the 2D design from what happens in the time dimension just does not make sense to me.
Seeing screens side-by-side can be very helpful to present a flow. It’s also a good way to compare alternatives. But this is such a small part of the design work I do that I’m not sure why it’s the centerpiece of these tools.
Screen designs are mere examples of the states of the software components they contain. In an actual product, those states don’t follow a linear path. A user can, for example, add an item to a todo list or open the sharing options to review who else has access to it, and do that in any order they want. So for usability testing, it’s often desirable not to force test users down a linear path. But with the static screen approach, designers have to manually create a screen for every possible step that can occur in a test. A simple flow of 3 steps with 5 options for each already requires 15 screens that have to be equipped with interactivity.
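To make the contrast concrete, here is a minimal sketch (in TypeScript, with hypothetical names) of modeling the todo-list example as component state rather than screens: one state definition plus its transitions covers every ordering of user actions, each of which would otherwise need its own static screen.

```typescript
// Hypothetical state model for the todo-list example above.
// One state + transitions covers any ordering of actions;
// a screen-based tool needs a separate screen per step.

type SharingPanel = { open: boolean; collaborators: string[] };

interface TodoState {
  items: string[];
  sharing: SharingPanel;
}

type Action =
  | { kind: "addItem"; text: string }
  | { kind: "toggleSharing" };

function update(state: TodoState, action: Action): TodoState {
  switch (action.kind) {
    case "addItem":
      return { ...state, items: [...state.items, action.text] };
    case "toggleSharing":
      return {
        ...state,
        sharing: { ...state.sharing, open: !state.sharing.open },
      };
  }
}

// Test users can take these actions in any order;
// no enumeration of screens is needed.
const initial: TodoState = {
  items: [],
  sharing: { open: false, collaborators: ["me"] },
};

const afterAdd = update(initial, { kind: "addItem", text: "Buy milk" });
const afterShare = update(afterAdd, { kind: "toggleSharing" });
```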
Arguably, screen design is a design discipline. But no self-respecting designer calls themselves a screen designer. For years designers have been complaining about others not understanding that design is more than making pretty images. We’re all UXers, experience designers, or just designers, right? I would love it if we didn’t have to separate visual design from interaction, logic, timing and animation.
2. Design with variables and components
Instead of creating individual screens, I want to create component-based design systems. I want to:
- Define style guides with basic variables, like colors, fonts, sizes and transition properties
- Design components that are styled based on that style guide
- Design components that can vary in size and can be animated
- Create responsive layouts and prototypes with the components
The prototypes should allow test users to interact with the components. Based on user input, the prototype should react with changes in size, style or content.
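The wishlist above can be sketched in code. This is a hypothetical TypeScript illustration (all names are mine, not from any real tool): a style guide expressed as plain variables, and a component whose styling is derived entirely from those variables, varies by size, and reacts to state.

```typescript
// Hypothetical sketch: a style guide as plain variables ("design tokens"),
// with components deriving their styles from it instead of hard-coded values.

const tokens = {
  colorPrimary: "#0a6cff",
  colorSurface: "#ffffff",
  fontBody: "16px 'Inter', sans-serif",
  spacingUnit: 8, // px; all component sizes are multiples of this
  transition: "150ms ease-out",
} as const;

type Size = "small" | "medium" | "large";

// A button styled entirely from the style guide, varying in size
// and reacting to state (the "changes" a prototype should show).
function buttonStyle(size: Size, pressed: boolean) {
  const scale = { small: 1, medium: 2, large: 3 }[size];
  return {
    background: pressed ? tokens.colorSurface : tokens.colorPrimary,
    font: tokens.fontBody,
    padding: `${tokens.spacingUnit * scale}px`,
    transition: tokens.transition,
  };
}

const style = buttonStyle("medium", false);
```

Changing a token (say, `spacingUnit`) would restyle every component built on it, which is the whole point of designing with variables instead of individual screens.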
3. Consider the larger design process
My favorite design tool is Realtimeboard. It’s an endless online canvas on which the whole team can put sticky notes, text, comments and pictures. You can also add simple shapes like boxes and arrows. Nothing fancy, and that’s what makes it great: anyone can learn to use it in minutes. Its simplicity also makes it versatile. We use it to share research insights, draw user journeys and wireframes, and do some task management via the comments. And it’s not just designers who use it: we also visualize business matters, make technical diagrams and do early copywriting on it.
I use Realtimeboard’s Sketch plugin to import screens from Sketch and sync any changes. That way we have user journeys with current designs available to the whole team. I can imagine a tool like Realtimeboard where prototypes are created in the same environment we use for research synthesis and task management.
I understand that to make successful products, one has to start with clear, concise goals. It’s also understandable that the screen design tools started by filling a niche. But by now we have so many designers who act as if design starts and stops with styling standard components. If design is solving problems for users and society, then design tools should be problem-solving tools.
4. The design should be the spec
No one likes translating designs into a specification that engineers can read. Current screen design tools let engineers quickly get the visual specs of elements. None of them does a good job of including descriptions of component behavior, though. If only we could add interactivity and behavior descriptions to our screens!
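As one way this could look, here is a hypothetical TypeScript sketch (the schema and names are my own invention): a component spec that bundles the visual values current tools already export with the behavior descriptions they leave out, so the design artifact itself can serve as the spec engineers read.

```typescript
// Hypothetical sketch: a component spec that pairs visual values
// with human-readable behavior rules.

interface BehaviorRule {
  trigger: string; // e.g. "click", "hover", "escape key"
  outcome: string; // plain-language description of the expected change
}

interface ComponentSpec {
  name: string;
  visual: Record<string, string>; // the part current tools already export
  behavior: BehaviorRule[];       // the part that is usually missing
}

const shareDialogSpec: ComponentSpec = {
  name: "ShareDialog",
  visual: { width: "320px", background: "#ffffff", radius: "8px" },
  behavior: [
    { trigger: "escape key", outcome: "dialog closes, focus returns to the trigger button" },
    { trigger: "click outside", outcome: "dialog closes without saving changes" },
    { trigger: "invite submitted", outcome: "collaborator list re-renders with the new entry" },
  ],
};

// A trivial renderer turning the spec into documentation engineers can read.
function specToDoc(spec: ComponentSpec): string {
  const rules = spec.behavior
    .map((r) => `- On ${r.trigger}: ${r.outcome}`)
    .join("\n");
  return `${spec.name}\n${rules}`;
}

const doc = specToDoc(shareDialogSpec);
```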
What I look forward to
I’m very curious about a tool coming out soon: Phase. It promises to be component-based, supporting states. Framer X also looks promising, but I haven’t had time to find out if it’s more than screen design + code.
There is a lot of software attempting to turn design into code and vice versa. Recently my engineering colleagues had a good look at it, and it turned out to be disappointing to us. (Sidenote: some of the code-to-design software seemed to work well. Code-to-design can make sense when a design system is in place and many designers need access to its components, but that’s not relevant to my work, where I’m involved in the early stages of organizations and their products. The software turning design into code is limited, forcing designs into standard patterns, and none of it produced high-quality code: typically it’s HTML without semantics other than divs and spans.) It’s a start though. It shouldn’t be long before we get design software that makes almost-real products!
Update August 2019
Based on what I read on their Slack, Phase is still heavily under development. They’re beta testing and use Figma as an assets editor.
In July 2019 Tom Johnson wrote a post I agree with. His main point is that design tools need the box model. The post ends with a list of tools he thinks ‘are starting to solve these problems’.