<![CDATA[The Feature Trap: Why Feature Centricity Is Harming Your Product]]> https://smashingmagazine.com/2024/01/feature-centricity-harming-product/ https://smashingmagazine.com/2024/01/feature-centricity-harming-product/ Mon, 29 Jan 2024 15:00:00 GMT Most product teams think in terms of features. Features are easy to brainstorm and write requirement docs for, and they fit nicely into our backlogs and ticketing systems. In short, thinking in terms of features makes it easy to manage the complex task of product delivery.

However, we know that the best products are more than the sum of their parts, and sometimes, the space between the features is as important as the features themselves. So, what can we do to improve the process?

The vast majority of product teams are organized around delivering features — new pieces of functionality that extend the capabilities of the product. These features will often arise from conversations the company is having with prospective buyers:

  • “What features are important to you?”
  • “What features are missing from your current solution?”
  • “What features would we need to add in order to make you consider switching from your existing provider to us?” and so on.

The company will then compile a list of the most popular feature requests and will ask the product team to deliver them.

For most companies, this is what customer centricity looks like: asking customers what they want and then building those features into the product in the hope that they’ll buy. It rests on the fundamental belief that people buy products primarily for their features, so we assemble our roadmaps accordingly.

We see this sort of thinking with physical products all the time. For instance, take a look at the following Amazon listing for one of the top-rated TV sets from last year. It’s as though they hurled the entire product roadmap directly onto the listing!

Now, of course, if you’re a hardcore gamer with very specific requirements, you might absolutely be looking for a TV with “VRR, ALLM, and eARC as specified in HDMI 2.1, plus G-Sync, FreeSync, Game Optimizer, and HGiG.” But for me? I don’t have a clue what any of those things mean, and I don’t really care. Instead, I’ll go to a review site where they explain what the product actually feels like to use in everyday life. The reviewers will explain how good the unboxing experience is. How sturdy the build is. How easy it is to set up. They’ll explain that the OS is really well put together and easy to navigate, the picture quality is probably the best on the market, and the sound, while benefiting from the addition of a quality sound bar, is very clear and understandable. In short, they’ll be describing the user experience.

The ironic thing is that when I talk to most founders, product managers, and engineers about how they choose a TV, they’ll say exactly the same thing. And yet, for some reason, we struggle to take that personal experience and apply it to our own users!

Tip: As a fun little trick, next time you find yourself arguing about features over experience, ask people to get out their phones. I bet that the vast majority of folks in the room will have an iPhone, despite Samsung and Google phones generally having better cameras, more storage, better screens, and so on. The reason why iPhones have risen to dominance (if we ignore the obvious platform lock-in) is because, despite perhaps not having the best feature set on the market, they feel so nice to use.

Seeing Things From The Users’ Perspective

While feature-centric thinking is completely understandable, it misses a whole class of problems. The features in and of themselves might look good on paper and work great in practice, but do they mesh together to form a convincing whole? Or is the full experience a bit of a mess?

All the annoying bumps, barriers, and inconsistencies that start accruing around each new feature, if left unsolved, can limit the amount of value users can extract from the product. And if you don’t effectively identify and remove these barriers in a deliberate and structured way, any additional functionality will simply add to the problem.

If users are already struggling to extract value from existing features, how do you expect them to extract any additional value you might be adding to the product?

“As a product manager, it’s natural to want to offer as many features as possible to your customers. After all, you want to provide value, right? But what happens when you offer too many features? Your product becomes bloated, convoluted, and difficult to use.”
— “Are Too Many Features Hurting Your Product?” (FAQPrime)

These barriers and inconsistencies are usually the result of people not thinking through the user experience. And I don’t mean user experience in some abstract way. I mean literally walking through the product step-by-step as though you’d never seen it before — sometimes described as having a “beginner’s mind” — and considering the following questions:

  • Is it clear what value this product delivers and how I can get that value?
  • If I were a new user, would the way the product is named and structured make sense to me?
  • Can I easily build up a mental model of where everything is and how the product works?
  • Do I know what to do next?
  • How is this going to fit into my existing workflow?
  • What’s getting in my way and slowing me down?

While approaching things with a beginner’s mind sounds easy, it’s actually a surprisingly hard mindset for people to adopt — letting go of everything they know (or think they know) about their product, market, and users. Instead, their position as a superuser tends to cloud their judgment: believing that because something is obvious to them (something that they have created and have been working on for the past two years), it will be obvious to a new user who has spent less than five minutes with the product. This is where usability testing (a UX research method that evaluates whether users are able to use a digital product efficiently and effectively) should normally “enter the stage.”

The issue with trying to approach things with a beginner’s mind is also often exacerbated by “motivated reasoning,” the idea that we view things through the lens of what we want to be true, rather than what is true. To this end, you’re much more likely to discount feedback from other people if that feedback is going to result in some negative outcome, like having to spend extra time and money redesigning a user flow when you’d rather be shipping that cool new feature you came up with last week.

I see this play out in usability testing sessions all the time. The first subject comes in and struggles to grasp a core concept, and the team rolls their eyes at the incompetence of the user. The next person comes in and has the same experience, causing the team to ask where you found all these stupid users. However, as the third, fourth, and fifth person comes through and experiences the same challenge, “lightbulbs” slowly start forming over the team members’ heads:

“Maybe this isn’t the users’ fault after all? Maybe we’ve assumed a level of knowledge or motivation that isn’t there; maybe it’s the language we’ve used to describe the feature, or maybe there’s something in the way the interface has been designed that is causing this confusion?”

These kinds of insights can cause teams to fundamentally pivot their thinking. But this can also create a huge amount of discomfort and cognitive dissonance — realizing that your view of the world might not be entirely accurate. As such, there’s a strong motivation for people to avoid these sorts of realizations, which is why we often put so little effort (unfortunately) into understanding how our users perceive and use the things we create.

Developing a beginner’s mind takes time and practice. It’s something that most people can cultivate, and it’s actually something I find designers are especially good at — stepping into other people’s shoes, unclouded by their own beliefs and biases. This is what designers mean when they talk about using empathy.

Towards A Two-Tier Process (Conclusion)

We obviously still need to have “feature teams”: folks who can understand and deliver the new capabilities our users request (and our business partners demand). While I’d like to see more thought and validation when it comes to feature selection and creation, it’s often quicker to add new features and see if they get used than to try to use research to give a definitive answer.

As an example, I’m working with one founder at the moment who has been going around the houses with their product team for months about whether a feature would work. He eventually convinced them to give it a try — it took four days to push out the change, and they got the feedback they needed almost instantly.

However, as well as having teams focused on delivering new user value, we also need teams who are focused on helping unlock and maximize existing user value. These teams need to concentrate on outcomes over outputs; so, less “deliver X capability in Y sprints” and more “deliver X improvement by Y date.” To do this, these teams need to have a high level of agency. This means taking them out of the typical feature factory mindset.

The teams focusing on helping unlock and maximize existing user value need to be a little more cross-disciplinary than your traditional feature team. They’re essentially developing interventions rather than new capabilities — coming up with a hypothesis and running experiments rather than adding bells and whistles. “How can we improve the onboarding experience to increase activation and reduce churn?” Or, “How can we improve messaging throughout the product so people have a better understanding of how it works and increase our North Star metric as a result?”

There’s nothing radical about focusing on outcomes over outputs. In fact, this way of thinking is at the heart of both the Lean Startup movement and Product-Led Growth. The problem is that while this is seen as received wisdom, very few companies actually put it into practice (although if you ask them, most founders believe that this is exactly what they do).

Put simply, you can’t expect teams to work independently to deliver “outcomes” if you fill their calendar with output work.

So this two-tier system is really a hack, allowing you to keep sales, marketing, and your CEO (and your CEO’s partner) happy by delivering a constant stream of new features while spinning up a separate team who can remove themselves from the drum-beat of feature delivery and focus on the outcomes instead.

Further Reading

  • “Why Too Many Features Can Ruin a Digital Product Before It Begins” (Komodo Digital)
    Digital products are living, ever-evolving things. So, why do so many companies force feature after feature into projects without any real justification? Let’s talk about feature addiction and how to avoid it.
  • “Are Too Many Features Hurting Your Product?” (FAQPrime)
    As a product manager, it’s natural to want to offer as many features as possible to your customers. After all, you want to provide value, right? But what happens when you offer too many features? Your product becomes bloated, convoluted, and difficult to use. Let’s take a closer look at what feature bloat is, why it’s a problem, and how you can avoid it.
  • “Twelve Signs You’re Working in a Feature Factory,” John Cutler
    The author started using the term Feature Factory when a software developer friend complained that he was “just sitting in the factory, cranking out features, and sending them down the line.” The article was written in 2016 and still holds its ground today; a newer version followed in 2019 (“Twelve Signs You’re Working in a Feature Factory — Three Years Later”).
  • “What Is The Agile Methodology?” (Atlassian)
    The Agile methodology is a project management approach that involves breaking the project into phases and emphasizes continuous collaboration and improvement. Teams follow a cycle of planning, executing, and evaluating.
  • “Problem Statement vs. Hypothesis — Which Is More Important?,” Sadie Neve
    When it comes to experimentation and conversion rate optimization (CRO), we often see people relying too much on their instincts. But in reality, nothing in experimentation is certain until tested. This means experimentation should be approached like a scientific experiment that follows three core steps: identify a problem, form a hypothesis, and test that hypothesis.
  • “The Build Trap,” Melissa Perri (Produx Labs)
    The “move fast and break things” mantra seems to have taken the startup world by storm since Facebook made it their motto a few years ago. But there is a serious flaw with this phrase, and it’s that most companies see this as an excuse to stop analyzing what they intend to build and why they should build it — those companies get stuck in what I call “The Build Trap.”
  • “What Is Product-led Growth?” (PLG Collective)
    We are in the middle of a massive shift in the way people use and buy software. It’s been well over a decade since Salesforce brought software to the cloud. Apple put digital experiences in people’s pockets back in 2009 with the first iPhone. And in the years since, the market has been flooded with consumer and B2B products that promise to meet just about every need under the sun.
  • The Lean Startup
    The Lean Startup isn’t just about how to create a more successful entrepreneurial business. It’s about what we can learn from those businesses to improve virtually everything we do.
  • “Usability Testing — The Complete Guide,” Daria Krasovskaya and Marek Strba
    Usability testing is the ultimate method of uncovering any type of issue related to a system’s ease of use, and it truly is a must for any modern website or app owner.
  • "The Value of Great UX,” Jared Spool
    How can we show that a great user experience produces immense value for the organization? We can think of experience as a spectrum, from extreme frustration to delight. In his article, Jared will walk you through how our work as designers is able to transform our users’ experiences from being frustrated to being delighted.
  • “Improving The Double Diamond Design Process,” Andy Budd (Smashing Magazine)
    The so-called “Double Diamond” is a great way of visualizing an ideal design process, but it’s just not the way most companies deliver new projects or services. The article proposes a new “Double Diamond” idea that better aligns with the way work actually gets done and highlights the place where design has the most leverage.
  • “Are We Moving Towards a Post-Agile Age?,” Andy Budd
    Agile has been the dominant development methodology in our industry for a while now. While some teams are just getting to grips with Agile, others have extended it to the point that it’s no longer recognizable as Agile; in fact, many of the most progressive design and development teams are Agile only in name. What they are actually practicing is something new, different, and innately more interesting — something I’ve been calling Post-Agile thinking.
]]>
hello@smashingmagazine.com (Andy Budd)
<![CDATA[A Simple Guide To Retrieval Augmented Generation Language Models]]> https://smashingmagazine.com/2024/01/guide-retrieval-augmented-generation-language-models/ https://smashingmagazine.com/2024/01/guide-retrieval-augmented-generation-language-models/ Fri, 26 Jan 2024 15:00:00 GMT Suppose you ask some AI-based chat app a reasonably simple, straightforward question. Let’s say that app is ChatGPT, and the question you ask is right in its wheelhouse, like, “What is LangChain?” That’s really a softball question, isn’t it? ChatGPT is powered by the same sort of underlying technology that LangChain is designed to work with, so it ought to ace this answer.

So, you type and eagerly watch the app spit out conversational strings of characters in real-time. But the answer is less than satisfying.

In fact, ask ChatGPT — or any other app powered by language models — any question about anything recent, and you’re bound to get some sort of response along the lines of, “As of my last knowledge update…” It’s like ChatGPT fell asleep Rumpelstiltskin-style back in January 2022 and still hasn’t woken up. You know how people say, “You’d have to be living under a rock not to know that”? Well, ChatGPT took up residence beneath a giant chunk of granite two years ago.

While many language models are trained on massive datasets, data is still data, and data becomes stale. You might think of it like Googling “CSS animation,” and the top result is a Smashing Magazine article from 2011. It might still be relevant, but it also might not. The only difference is that we can skim right past those instances in search results while ChatGPT gives us some meandering, unconfident answers we’re stuck with.

There’s also the fact that language models are only as “smart” as the data used to train them. There are many techniques to improve a language model’s performance, but what if language models could access real-world facts and data outside their training sets without extensive retraining? In other words, what if we could supplement the model’s existing training with accurate, timely data?

This is exactly what Retrieval Augmented Generation (RAG) does, and the concept is straightforward: let language models fetch relevant knowledge. This could include recent news, research, new statistics, or any new data, really. With RAG, a large language model (LLM) is able to retrieve “fresh” information for more high-quality responses and fewer hallucinations.

But what exactly does RAG make available, and where does it fit in a language chain? We’re going to learn about that and more in this article.

Understanding Semantic Search

Unlike keyword search, which relies on exact word-for-word matching, semantic search interprets a query’s “true meaning” and intent — it goes beyond merely matching keywords to produce more results that bear a relationship to the original query.

For example, a semantic search querying “best budget laptops” would understand that the user is looking for “affordable” laptops without querying for that exact term. The search recognizes the contextual relationships between words.

This works because of text embeddings, or mathematical representations of meaning that capture nuances. It’s an interesting process of feeding a query through an embedding model that, in turn, converts the query into a set of numeric vectors that can be used for matching and making associations.

The vectors represent meanings, and there are benefits that come with that, allowing semantic search to perform a number of useful functions, like scrubbing irrelevant words from a query, indexing information for efficiency, and ranking results based on a variety of factors, such as relevance.
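To make that tangible, here is a minimal sketch of embedding and comparing texts with the sentence-transformers library (which this article installs later on); the model name and sample strings are just illustrative choices:

from sentence_transformers import SentenceTransformer, util

# Load a pre-trained embedding model.
model = SentenceTransformer("BAAI/bge-base-en-v1.5")

query = "best budget laptops"
documents = [
    "Affordable notebooks under $500",
    "How to bake sourdough bread",
]

# Convert the query and documents into numeric vectors.
query_vec = model.encode(query)
doc_vecs = model.encode(documents)

# Cosine similarity scores each document against the query. The laptop
# document ranks highest even though it never contains the word "budget".
print(util.cos_sim(query_vec, doc_vecs))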

Special databases optimized for speed and scale are a strict necessity when working with language models because you could be searching through billions of documents. With a semantic search implementation that includes text embeddings, storing and querying high-dimensional embedding data is much more efficient, producing quick evaluations of queries against document vectors across large datasets.
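As a small taste of what that looks like in practice, here is a hypothetical sketch of storing and querying documents with Chroma, the embedding database used later in this article; the collection name and documents are made up:

import chromadb

# An in-memory client; Chroma embeds the documents with its default model.
chroma_client = chromadb.EphemeralClient()
collection = chroma_client.create_collection("semantic-demo")

collection.add(
    documents=["Affordable notebooks under $500", "How to bake sourdough bread"],
    ids=["doc1", "doc2"],
)

# Semantic search: the laptop document should come back first.
results = collection.query(query_texts=["best budget laptops"], n_results=1)
print(results["documents"])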

That’s the context we need to start discussing and digging into RAG.

Retrieval Augmented Generation

Retrieval Augmented Generation (RAG) is based on research produced by the Meta team to advance the natural language processing capabilities of large language models. Meta’s research proposed combining retriever and generator components to make language models more intelligent and accurate for generating text in a human voice and tone, which is also commonly referred to as natural language processing (NLP).

At its core, RAG seamlessly integrates retrieval-based models, which fetch external information, with generative models, which produce natural language. RAG models outperform standard language models on knowledge-intensive tasks like answering questions by augmenting them with retrieved information; this also enables more well-informed responses.

As the name suggests, there are two core RAG components: a retriever and a generator. Let’s zoom in and look at how each one contributes to a RAG architecture.

Retriever

We already covered it briefly, but a retriever module is responsible for finding the most relevant information from a dataset in response to queries and makes that possible with the vectors produced by text embedding. In short, it receives the query and retrieves what it evaluates to be the most accurate information based on a store of semantic search vectors.

Retrievers are models in and of themselves. But unlike language models, retrievers are not in the business of “training” or machine learning. They are more of an enhancement or an add-on that provides additional context for understanding and features for fetching that information efficiently.

That means there are available options out there for different retrievers. You may not be surprised that OpenAI offers one, given their ubiquity. There’s another one provided by Cohere as well as a slew of smaller options you can find in the Hugging Face community.

Generator

After the retriever finds relevant information, it needs to be passed back to the application and displayed to the user. For that, we need a generator capable of converting the retrieved data into human-readable content.

What’s happening behind the scenes is that the generator accepts the embeddings it receives from the retriever, mashes them together with the original query, and passes them through the trained language model for an NLP pass on the way to becoming generated text.

The entire tail end of that process involving the language model and NLP is a process in its own right and is something I have explained in greater detail in another Smashing Magazine article if you are curious about what happens between the generator and final text output.

RAG Full View

Pulling everything together, a complete RAG flow goes like this (a runnable sketch follows the list):

  1. A query is made.
  2. The query is passed to the RAG model.
  3. The RAG model encodes the query into text embeddings that are compared to a dataset of information.
  4. The RAG’s retriever decides the most relevant information with its semantic search abilities and converts it into vector embeddings.
  5. The RAG’s retriever sends the parsed embeddings to the generator.
  6. The generator accepts the embeddings and combines them with the original query.
  7. The generator passes its work off to the language model to produce natural-sounding content presented to the user.
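To make the sequence concrete, here is a minimal, runnable sketch of steps 2 through 6, reusing the sentence-transformers embedding approach from earlier. The two-document knowledge base is made up for illustration, and the assembled prompt is what would be handed to the generator (the LLM) in step 7:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# A made-up, two-document "dataset" standing in for a real document store.
knowledge_base = [
    "LangChain is a framework for developing applications powered by language models.",
    "Chroma is an open-source embedding database.",
]
doc_vecs = model.encode(knowledge_base)

def rag_prompt(query: str, top_k: int = 1) -> str:
    # Steps 2-4: encode the query and semantically search the dataset.
    query_vec = model.encode(query)
    scores = util.cos_sim(query_vec, doc_vecs)[0]
    passages = [knowledge_base[int(i)] for i in scores.argsort(descending=True)[:top_k]]

    # Steps 5-6: combine the retrieved passages with the original query.
    return "Context:\n" + "\n".join(passages) + f"\n\nQuestion: {query}"

print(rag_prompt("What is LangChain?"))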

LLM Hallucinations And Knowledge Limitations

We opened this article by describing “hallucinations”: LLMs’ incorrect responses, or something along the lines of “I don’t know, but here’s what I do know.” The LLM will “make stuff up” because it simply doesn’t have updated information to respond with.

Let’s revisit the first query we used to kick off this article — “What is LangChain?” — and compare responses from the Vicuna and GPT-4 language models:

Here’s the transcription of the second response, from OpenAI’s GPT-4, for posterity:

“LangChain is a blockchain-based decentralized translation platform. It’s designed to connect translators and those seeking translation services directly without the need for intermediaries. This system uses artificial intelligence and blockchain technologies to provide high-quality, efficient translation services. LangChain also has a token-based economy, where users can earn and spend tokens within the LangChain ecosystem.”

Both Vicuna and GPT-4 refer to LangChain as a blockchain platform. Blockchain is a technology that stores data in a decentralized manner using chained blocks, so the models’ responses sound plausible given the “chain” in the name. However, LangChain is not actually a blockchain-based technology.

This is a prime example demonstrating how LLMs will fabricate responses that may seem believable at first glance but are incorrect. LLMs are designed to predict the next “plausible” tokens in a sequence, whether those are words, subwords, or characters. They don’t inherently understand the full meaning of the text. Even the most advanced models struggle to avoid made-up responses, especially for niche topics they lack knowledge about.

Let’s look at another example by querying: “What is the most preferred framework used by developers for building applications leveraging large language models?”

While Vicuna offers a couple of reasonable starting points for answering the question, the frameworks it refers to have limitations for efficiency and scalability in production-level applications that use LLMs. That could quite possibly send a developer down a bad path. And as bad as that is, look at the GPT-4 response that changes topics completely by focusing on LLVM, which has nothing to do with LLMs.

What if we refine the question, but this time querying different language models? This time, we’re asking: “What is the go-to framework developed for developers to seamlessly integrate large language models into their applications, focusing on ease of use and enhanced capabilities?”

Honestly, I was expecting the responses to refer to some current framework, like LangChain. However, the GPT-4 Turbo model suggests the “Hugging Face” transformer library, which I believe is a great place to experiment with AI development but is not a framework. If anything, it’s a place where you could conceivably find tiny frameworks to play with.

Meanwhile, the GPT-3.5 Turbo model produces a much more confusing response, talking about OpenAI Codex as a framework, then as a language model. Which one is it?

We could continue producing examples of LLM hallucinations and inaccurate responses and have fun with the results all day. We could also spend a lot of time identifying and diagnosing what causes hallucinations. But we’re here to talk about RAG and how to use it to prevent hallucinations from happening in the first place. The Master of Code Global blog has an excellent primer on the causes and types of LLM hallucinations with lots of useful context if you are interested in diving deeper into the diagnoses.

Integrating RAG With Language Models

OK, so we know that LLMs sometimes “hallucinate” answers. We know that hallucinations are often the result of outdated information. We also know that there is this thing called Retrieval Augmented Generation that supplements LLMs with updated information.

But how do we connect RAG and LLMs together?

Now that you have a good understanding of RAG and its benefits, we can dive into how to implement it yourself. This section will provide hands-on examples to show you how to code RAG systems and feed new data into your LLM.

But before jumping right into the code, you’ll need to get a few key things set up:

  • Hugging Face
    We’ll use this library in two ways. First, to choose an embedding model from the model hub that we can use to encode our texts, and second, to get an access token so we can download the Llama-2 model. Sign up for a free Hugging Face account in preparation for the work we’ll cover in this article.
  • Llama-2
    Meta’s powerful LLM will be our generator model. Request access via Meta’s website so we can integrate Llama-2 into our RAG implementation.
  • LlamaIndex
    We’ll use this framework to load our data and feed it into Llama-2.
  • Chroma
    We’ll use this embedding database for fast vector similarity search and retrieval. This is actually where we can store our index.

With the key tools in place, we can walk through examples for each phase: ingesting data, encoding text, indexing vectors, and so on.

Install The Libraries

We need to install the RAG libraries we identified, which we can do by running the following commands in a new project folder:

# Install essential libraries for our project
!pip install llama-index transformers accelerate bitsandbytes --quiet
!pip install chromadb sentence-transformers pydantic==1.10.11 --quiet

Next, we need to import specific modules from those libraries. There are quite a few that we want, like ChromaVectorStore and HuggingFaceEmbedding for vector indexing and embeddings capabilities, StorageContext and chromadb to provide database and storage functionalities, and even more for computations, displaying outputs, loading language models, and so on. This can go in a file named app.py at the root level of your project.

## app.py

## Import necessary libraries
from llama_index import VectorStoreIndex, SummaryIndex, download_loader, ServiceContext
from llama_index.vector_stores import ChromaVectorStore
from llama_index.storage.storage_context import StorageContext
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index.response.notebook_utils import display_response
import torch
from transformers import BitsAndBytesConfig
from llama_index.prompts import PromptTemplate
from llama_index.llms import HuggingFaceLLM
from IPython.display import Markdown, display
import chromadb
from pathlib import Path
import logging
import sys

Provide Additional Context To The Model

The data we will leverage for our language model is a research paper titled “Enhancing LLM Intelligence with ARM-RAG: Auxiliary Rationale Memory for Retrieval Augmented Generation” (PDF) that covers an advanced retrieval augmentation generation approach to improve problem-solving performance.

We will use the download_loader() module we imported earlier from llama_index to download the PDF file:

PDFReader = download_loader("PDFReader")
loader = PDFReader()
documents = loader.load_data(file=Path('/content/ARM-RAG.pdf'))

Even though this demonstration uses a PDF file as a data source for the model, that is just one way to supply the model with data. For example, there is Arxiv Papers Loader as well as other loaders available in the LlamaIndex Hub. But for this tutorial, we’ll stick with loading from a PDF. That said, I encourage you to try other ingestion methods for practice!
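For instance, a sketch of pulling papers straight from arXiv might look like the following. This assumes the ArxivReader loader from the LlamaIndex Hub, so double-check the exact reader name and arguments there before relying on it:

# Assumption: the ArxivReader loader from the LlamaIndex Hub.
ArxivReader = download_loader("ArxivReader")
loader = ArxivReader()
documents = loader.load_data(search_query="retrieval augmented generation", max_results=3)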

Now, we need to download Llama-2, our open-source text generation model from Meta. If you haven’t already, please set up an account with Meta and have your access token available with read permissions, as this will allow us to download Llama-2 from Hugging Face.

# huggingface api token for downloading llama2
hf_token = "YOUR Access Token"

To fit Llama-2 into constrained memory, like in Google Colab, we’ll configure 4-bit quantization to load the model at a lower precision.

quantization_config = BitsAndBytesConfig(
  load_in_4bit=True,
  bnb_4bit_compute_dtype=torch.float16,
  bnb_4bit_quant_type="nf4",
  bnb_4bit_use_double_quant=True,
)

Google Colab is where I typically do most of my language model experiments. We’re shrinking the language model down with that last snippet so it’s not too large for Colab to support.

Next, we need to initialize HuggingFaceLLM to set up our model for generating text on the fly:

llm = HuggingFaceLLM(
    model_name="meta-llama/Llama-2-7b-chat-hf", 
    tokenizer_name="meta-llama/Llama-2-7b-chat-hf",
    query_wrapper_prompt=PromptTemplate("<s> [INST] {query_str} [/INST] "),
    context_window=3900,
    model_kwargs={"token": hf_token, "quantization_config": quantization_config},
    tokenizer_kwargs={"token": hf_token},
    device_map="auto",
)

Note that what’s happening here is that we’ve defined our model (Llama-2), but a specific subset of it is designed for a chat UI (Llama-2-Chat). Even though this is what we are using for this article, you can also pick from other text generation models in the Hugging Face hub, such as Neural-chat from Intel.

Compare Outputs

OK, so we have the model loaded and configured. Let’s try it out by asking questions that it could reasonably answer using the content we provided via the ARM-RAG PDF file to see how the model responds without any context. The following code displays the text output without the PDF providing data to the model, for comparison’s sake:

from IPython.display import HTML, display

# Assuming resp contains the response
resp = llm.complete("What is ARM-RAG?")

# Using HTML with inline CSS for styling (gray color, smaller font size)
html_text = f'<p style="color: #1f77b4; font-size: 14px;"><b>{resp}</b></p>'

Let’s see the response:

display(HTML(html_text))

ARM-RAG (ARM Register-Allocating Generator) is a register allocation algorithm used in compiler design. nobody knows who invented it, but it was first described in a 1995 paper by Mark H. T. Kim and Andrew Y. Chen. The ARM-RAG algorithm is a register-allocating generator, meaning it is a heuristic algorithm that takes a program as input and generates a register allocation plan for it. The goal of the algorithm is to assign registers to variables in a way that minimizes the number of spills (i.e., loads or stores to/from memory) while satisfying all the memory access constraints of the program. The ARM-RAG algorithm works by first partitioning the program into a set of basic blocks, and then iteratively allocating registers to variables within each basic block. The algorithm uses a cost function to evaluate the quality of each allocation, and it selects the allocation with the lowest cost. The cost function takes into account factors such as the distance between the variable and the current register, the distance between the variable and the next allocation, and the number of spills that would be required to allocate the variable. ARM-RAG

Obviously, without any context from the PDF file, the model generates a hallucinated response that describes some unrelated algorithm. To fix this, we need to provide relevant passages from the paper to ground Llama-2’s responses. We’ll encode the document into embeddings and index them for retrieval; then, when we query, we can feed LLama-2 relevant retrieved passages to steer it toward accurate, on-topic responses based on the contents of the PDF file.

First, we need to create a client to interact with our ChromaDB database and a new collection that will hold our vector index.

# create client and a new collection
chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("firstcollection")

Then we need to set up the HuggingFaceEmbedding class with the specified model name for embedding the text into vectors:

# Load the embedding model
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-base-en-v1.5")

This initializes HuggingFaceEmbedding, passing the name of the pre-trained model we want to use, BAAI/bge-base-en-v1.5. There are other options, of course.

Now, we can set up the vector store and use it to index the embedded document vectors:

# set up ChromaVectorStore and load in data
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
index = VectorStoreIndex.from_documents(
  documents, storage_context=storage_context, service_context=service_context
)

This creates a ChromaVectorStore connected to our collection, defines the storage and service contexts, and generates a VectorStoreIndex from the loaded documents using the embedding model. The index is what allows us to quickly find relevant passages for a given query to augment the quality of the model’s response.

We should also establish a way for the model to summarize the data rather than spitting everything out at once. A SummaryIndex offers efficient summarization and retrieval of information:

summary_index = SummaryIndex.from_documents(documents, service_context=service_context)
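As a quick, hypothetical usage example, you could then query the summary index for a condensed overview of the paper:

# A minimal sketch: asking the summary index for a condensed answer.
summary_engine = summary_index.as_query_engine(response_mode="tree_summarize")
print(summary_engine.query("Summarize the ARM-RAG paper in a few sentences."))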

Earlier, the model hallucinated when we queried it without the added context from the PDF file. Now, let’s ask the same question, this time querying our indexed data:

# Define your query
query = "what is ARM-RAG?"

query_engine = index.as_query_engine(response_mode="compact")
response = query_engine.query(query)

from IPython.display import HTML, display

# Using HTML with inline CSS for styling (blue color)
html_text = f'<p style="color: #1f77b4; font-size: 14px;"><b>{response}</b></p>'
display(HTML(html_text))

Here’s the output:

Final Response: Based on the context information provided, ARM-RAG is a system that utilizes Neural Information Retrieval to archive reasoning chains derived from solving grade-school math problems. It is an Auxiliary Rationale Memory for Retrieval Augmented Generation, which aims to enhance the problem-solving capabilities of Large Language Models (LLMs). The system surpasses the performance of a baseline system that relies solely on LLMs, demonstrating the potential of ARM-RAG to improve problem-solving capabilities.

Correct! This response is way better than the one we saw earlier — no hallucinations here.

Since we’re using the chat subset of the Llama-2 model, we could have a back-and-forth conversation with the model about the content of the PDF file with follow-up questions. That’s because the indexed data supports NLP.

chat_engine = index.as_chat_engine(chat_mode="condense_question", verbose=True)
response = chat_engine.chat("give me real world examples of apps/system i can build leveraging ARM-RAG?")
print(response)

This is the resulting output:

Querying with: What are some real-world examples of apps or systems that can be built leveraging the ARM-RAG framework, which was discussed in our previous conversation?
Based on the context information provided, the ARM-RAG framework can be applied to various real-world examples, including but not limited to:

1. Education: ARM-RAG can be used to develop educational apps that can help students learn and understand complex concepts by generating explanations and examples that can aid in their understanding.

2. Tutoring: ARM-RAG can be applied to tutoring systems that can provide personalized explanations and examples to students, helping them grasp difficult concepts more quickly and effectively.

3. Customer Service: ARM-RAG can be utilized in chatbots or virtual assistants to provide customers with detailed explanations and examples of products or services, enabling them to make informed decisions.

4. Research: ARM-RAG can be used in research environments to generate explanations and examples of complex scientific concepts, enabling researchers to communicate their findings more effectively to a broader audience.

5. Content Creation: ARM-RAG can be applied to content creation systems that can generate explanations and examples of complex topics, such as news articles, blog posts, or social media content, making them more engaging and easier

Try asking more questions! Now that the model has additional context to augment its existing dataset, we can have a more productive — and natural — interaction.

Additional RAG Tooling Options

The whole point of this article is to explain the concept of RAG and demonstrate how it can be used to enhance a language model with accurate and updated data.

Chroma and LlamaIndex were the main components of the demonstrated RAG approach, but there are other tools for integrating RAG with language models. I’ve prepared a table that outlines some popular options you might consider trying with your own experiments and projects.

| RAG | Type of System | Capabilities | Integrations |
| --- | --- | --- | --- |
| Weaviate | Vector Database | Vector & Generative search | LlamaIndex, LangChain, Hugging Face, Cohere, OpenAI, etc. |
| Pinecone | Vector Database | Vector search, NER-powered search, Long-term memory | OpenAI, LangChain, Cohere, Databricks |
| txtai | Embeddings Database | Semantic graph & search, Conversational search | Hugging Face models |
| Qdrant | Vector Database | Similarity image search, Semantic search, Recommendations | LangChain, LlamaIndex, DocArray, Haystack, txtai, FiftyOne, Cohere, Jina Embeddings, OpenAI |
| Haystack | Framework | QA, Table QA, Document search, Evaluation | Elasticsearch, Pinecone, Qdrant, Weaviate, vLLM, Cohere |
| Ragchain | Framework | Reranking, OCR loaders | Hugging Face, OpenAI, Chroma, Pinecone |
| metal | Vector Database | Clustering, Semantic search, QA | LangChain, LlamaIndex |
Conclusion

In this article, we examined examples of language models producing “hallucinated” responses to queries as well as possible causes of those hallucinations. At the end of the day, a language model’s responses are only as good as the data it’s provided, and as we’ve seen, even the most widely used models contain outdated information. And rather than admit defeat, the language model spits out confident guesses that could be misconstrued as accurate information.

Retrieval Augmented Generation is one possible cure for hallucinations.

By embedding text vectors pulled from additional sources of data, a language model’s existing dataset is augmented with not only new information but the ability to query it more effectively with a semantic search that helps the model more broadly interpret the meaning of a query.

We did this by registering a PDF file with the model that contains content the model could use when it receives a query on a particular subject, in this case, “Enhancing LLM Intelligence with ARM-RAG: Auxiliary Rationale Memory for Retrieval Augmented Generation.”

This, of course, was a rather simple and contrived example. I wanted to focus on the concept of RAG more than its capabilities and stuck with a single source of new context around a single, specific subject so that we could easily compare the model’s responses before and after implementing RAG.

That said, there are some good next steps you could take to level up your understanding:

  • Consider using high-quality data and embedding models for better RAG performance.
  • Evaluate the model you use by checking Vectara’s hallucination leaderboard and consider using their model instead. The quality of the model is essential, and referencing the leaderboard can help you avoid models known to hallucinate more often than others.
  • Try refining your retriever and generator to improve results.

My previous articles on LLM concepts and summarizing chat conversations are also available to help provide even more context about the components we worked with in this article and how they are used to produce high-quality responses.

]]>
hello@smashingmagazine.com (Joas Pambou)
<![CDATA[CSS Blurry Shimmer Effect]]> https://smashingmagazine.com/2024/01/css-blurry-shimmer-effect/ https://smashingmagazine.com/2024/01/css-blurry-shimmer-effect/ Thu, 25 Jan 2024 15:00:00 GMT Imagine box-shadow but for a blur effect, where the backdrop of an element is blurred around that element, gradually decreasing the blur’s strength. I came up with the idea while trying to improve the contrast of a popup over a dark area where a box-shadow for the popup wouldn’t make much sense, design-wise. I then thought, well, what other ways might create a good contrast effect? And so suddenly, the idea of a gradual blur effect around the object came to me.

See the Pen Faded Outer Box Backdrop Blur [forked] by Yair Even Or.

It would be awesome if we had a box-blur property or perhaps some sort of blur keyword we could set on box-shadow the way we do for inset shadows. Unfortunately, CSS has no such property. But because CSS is awesome and flexible, we can still get the effect by combining a few CSS features and hack it through.

What I’m going to show you from here on out is the thought process I took to create the effect. Sometimes, I find it easier to know what’s coming up rather than meandering through a narrative of twists and turns. So, for those of you who are like me and want to jump straight into the process, this was my approach.

Start With The Markup

The effect is applied to the ::before pseudo-element of some element, say a popup, dialog, popover, or tooltip. Those are the common “targets” for this sort of effect. I think using a pseudo-element is a good approach here because it means we could technically scope the styles to the pseudo-element and re-purpose the effect on other elements without any HTML changes.

<!-- This is literally it for this demo -->
<div></div>

You can give the element a class, whatever dimensions you like, insert content and other child elements within it, or use a completely different element. The HTML isn’t the main ingredient for the secret sauce we’re making.

Position The Pseudo-Element

We want the ::before pseudo-element to occupy the entire area of the <div> element we’re using for this specific demo. Not only do we want it to cover the entire area, but we also want it to overflow because that establishes the visible area that holds the blur effect, so it will extend outwards.

::before {
  content: '';

  /* Make sure the parent element is at least relatively positioned to contain the pseudo-element. */
  position: absolute;

  /* The blur size should be anything below 0 so it will extend to the outside. */
  inset: -100px;

  /* This layer is positioned between the parent element and page background. */
  /* Make sure this value is one below the z-index of the parent element. */
  z-index: -1;
}

The code comments spell out the key pieces. An empty string has to be set for the content property so the ::before will be rendered, then we take it out of the document flow by giving it absolute positioning. This allows us to inset the element’s position, ultimately setting the blur effect’s reach as we would on the box-shadow property — only we’re using inset to control its size. We want a negative inset value, where the effect extends further the lower the value gets.

Until now, we’ve set the foundation for the effect. There’s nothing really to see just yet. Now, the fun begins!

Masking With Transparent Gradients

Gradients are technically images — generated by the browser — which can be used as CSS masks to hide parts of an element to create various shapes. You may have seen a few related Smashing Magazine articles where CSS masking has been showcased, such as this one by Temani Afif.

Transparency is the key thing when it comes to masking with gradients. Transparency allows us to gradually hide portions of an element in a way that creates the illusion of fading in or out.

That’s perfect in this case because we want the effect to be stronger, closer to the object, and fade in intensity as it gets further away.

We’ll use two gradients: one that goes horizontally and another that goes vertically. I chose this route because it mimics a rough rectangle shape that fades out towards the edges.

As I said, transparency is key. Both gradients start transparent, then transition to black until just before the end, where they go back to transparent to fade things out. Remember, these gradients are masks rather than background images, so they are declared on the mask property, which controls which pixels should be rendered and their opacity.

mask:
  linear-gradient(to top, transparent 0%, black 25% 75%, transparent 100%),
  linear-gradient(to left, transparent 0%, black 25% 75%, transparent 100%);

See the Pen Basic Gradient Mask [forked] by Yair Even Or.

  • The vertical gradient (to top) creates a fade from transparent at the bottom to black in the middle, then back to transparent at the top.
  • The horizontal gradient (to left) produces a fade from transparent on the right to black in the middle, then back to transparent on the left.

This dual-gradient approach positions the black regions so they merge, creating the rough baseline of a rectangular shape that will be refined in the next step. The mask property is best declared prefixed first and then un-prefixed to cover support across more browsers:

-webkit-mask:
  linear-gradient(to top, transparent 0%, black 25% 75%, transparent 100%),
  linear-gradient(to left, transparent 0%, black 25% 75%, transparent 100%);
mask:
  linear-gradient(to top, transparent 0%, black 25% 75%, transparent 100%),
  linear-gradient(to left, transparent 0%, black 25% 75%, transparent 100%);
Refining Using The mask-composite Property

The mask-composite property is part of the CSS Masking Module and enables pixel-wise control over the blending of masked content, allowing for intricate compositions.

The source-in value of this property is very useful for the effect we are after because it tells the browser to only retain the overlapping areas of the mask, so only pixels that contain both (mentioned above) gradients will get rendered. This locks in a rectangle shape, which can then be applied on any DOM element that has none-to-moderately curved corners (border-radius).
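Applied to our two-gradient mask, that amounts to the following pair of declarations. Note that the standard mask-composite property expresses the same behavior with the intersect keyword, while Safari’s prefixed version takes source-in:

/* Keep only the pixels where both gradient masks overlap. */
-webkit-mask-composite: source-in; /* Required for Safari */
mask-composite: intersect;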

Gradually Blurring The Backdrop

Now that we have a mask to work with, all we need to do is use it. The backdrop-filter CSS property can blur anything that is rendered “behind” an element using the blur() function:

::before {
  /* etc. */

  backdrop-filter: blur(10px);
}

The larger the value, the more intense the blur. I’m using 10px arbitrarily. In fact, we can variablize this stuff later to make the implementation even more flexible and easily configurable.
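For example, a hypothetical variable-driven version might look like this, where --blur-size and --blur-strength are custom property names invented for illustration:

::before {
  --blur-size: 100px;
  --blur-strength: 10px;

  /* etc. */

  inset: calc(-1 * var(--blur-size));
  -webkit-backdrop-filter: blur(var(--blur-strength)); /* Required for Safari */
  backdrop-filter: blur(var(--blur-strength));
}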

But wait! As it turns out, Safari requires a vendor-prefixed version of backdrop-filter to get it working there:

::before {
  /* etc. */

  -webkit-backdrop-filter: blur(10px); /* Required for Safari */
  backdrop-filter: blur(10px);
}

Note: It’s preferred to declare prefixed properties before the unprefixed variant so they serve as a fallback for browsers that don’t (yet) support them or their implementation is different.

A Touch of Synergistic Shadow

I think adding a slight semi-opaque black box-shadow that covers the blur area gives the effect a little extra depth. The only thing is that you’ll want to add it to the element itself rather than its ::before pseudo-element:

div {
  box-shadow: 0 0 40px #00000099;
}

That’s totally optional, though.

Bringing Everything Together

Here’s how the CSS comes out when we combine everything together.

/* This can be set on the ::before pseudo of the element it is applied to. */
::before {
  content: '';

  /* This layer is positioned between some element and its background. */
  position: absolute;

  /* This should not affect the contents of the container. */
  z-index: -1;

  /* The blur size should be anything below 0 so it will extend to the outside. */
  inset: -100px;

  /* The blur effect */
  -webkit-backdrop-filter: blur(10px); /* Required for Safari */
  backdrop-filter: blur(10px);

  /* A mask fades the blur effect, so it gets weaker */
  /* towards the edges, further from the container box. */
  /* (The fill color is irrelevant, so "red" is used as it's the shortest color name.) */
  mask:
    linear-gradient(to top, transparent 0%, red 100px calc(100% - 100px), transparent 100%),
    linear-gradient(to left, transparent 0%, red 100px calc(100% - 100px), transparent 100%);

  /* This merges the masks above so only the overlapping pixels are rendered. */
  /* This creates the illusion of a fade-out mask. */
  mask-composite: intersect;
  -webkit-mask-composite: source-in; /* Required for Safari */
}
The Final Demo, One More Time

See the Pen Faded Outer Box Backdrop Blur [forked] by Yair Even Or.

I’ve also prepared a simplified version with minimal code and no CSS variables that’s easier to read and re-purpose.

]]>
hello@smashingmagazine.com (Yair Even Or)
<![CDATA[The AI Dilemma In Graphic Design: Steering Towards Excellence In Typography And Beyond]]> https://smashingmagazine.com/2024/01/ai-dilemma-graphic-design-typography/ https://smashingmagazine.com/2024/01/ai-dilemma-graphic-design-typography/ Tue, 23 Jan 2024 13:00:00 GMT Imagine it’s 2028, and you’re at your projected, souped-up workstation. “Hey AI,” you say, “I need some type options for this page heading…” Before finishing, your AI assistant, affectionately nicknamed TypeMaster3000, eagerly interrupts: “Something bold yet whimsical? Or perhaps a serif that subtly says, ‘I’m sophisticated but know how to party’?”

You roll your eyes, “Just show me ten options. And no disco serifs this time.”

Gone are the days of clunky, AI-generated fonts that struggled to produce consistent, quality designs. Licensing issues? A thing of the past. The AI of 2028 presents you with multilingual, inventive font families, each glyph crafted to perfection. But perfection isn’t without its quirks.

As TypeMaster3000 rolls out its instantly generated font options, each design seems to have a touch of your recent seaside holiday. There’s Sandy Serif and Desert Island Display.

You sigh. “Less beach, more business, please.”

“Understood,” TypeMaster3000 chirps. “Reverting to corporate mode!”

You spot a typeface you like, and with a tap, the font slots into your design, aligning proportionally and positionally as if it was always meant to be there.

The Evolution of Technology In Typography

Back in the present, the creation of new, professional typefaces remains a meticulous and time-consuming endeavor, even with modern software. Throughout its history, the type industry has consistently been at the forefront of technological evolution, from wood to metal, film, and digital.

Each innovation has transformed type production and broadened access for designers, both in making and using type. Like all industries, we are poised at the base camp of the AI mountain, bracing ourselves for a steep and transformative climb.

Predictions of the medium-term impact of artificial intelligence on type design generally converge around two main scenarios:

  1. AI as a collaborative tool (AI as co-Pilot)
    In this scenario, AI assists in the type design process, taking on time-consuming tasks like creating bold versions or italics of a typeface. This would benefit type designers by streamlining their workflow and allowing more room for creative exploration without the burden of repetitive tasks.
  2. Fully AI-Generated Fonts (AI as autopilot)
    As with our TypeMaster3000 scenario, AI would independently create fonts in this scenario, likely resulting in a surge of free, enthusiast-prompted typefaces. Initially, these may lack the innovation, consistency, and craftsmanship of professional designs, so the market will likely lean towards more dependable, expertly generated AI fonts.

Over time, however, it is expected that we will gravitate towards autopilot fonts, as even naively prompted AI results (e.g., “Make me a seaside holiday font”) will begin to match, then surpass, human-made efforts. Both scenarios seem like good news for type users, offering a wider variety of fonts. But this change will completely disrupt the type industry.

A Gutenberg-Scale Transformation

Yet, this vision is far from the summit of our AI mountain. While disruptive, it marks a crucial and necessary transition for the type industry towards a groundbreaking future. While the journey may be challenging, AI is poised not just to generate innovative fonts but to fundamentally revolutionise our text communication, paving the way for a new era of dynamic and interactive typography.

Despite previous technological advances, typography actually hasn’t changed much since its invention almost 600 years ago, and much scribal creativity was sacrificed to make text more accessible. The next evolutionary step will be dynamic, context-sensitive typefaces. These would provide more nuanced and precise forms of communication, tailoring text to specific contexts and user needs.

This typographic revolution will significantly benefit our global civilization and should be our ultimate aim.

Current Advances In The AI Revolution

AI image generation, especially in deep learning, is advancing fast. Focussed mainly on pixel-based imagery, it achieves impressive results. These are created using neural networks to manipulate individual pixels, like creating a digital mosaic. Yet vector graphics, integral to font creation, are moving at a slower pace, with only a handful of papers surfacing in 2023.

Vector graphics, defined by Bézier curves, present a more complex challenge for neural encoding due to their algorithmic nature. Yet, there’s growing momentum in adapting language model techniques to this field, showing promise for more sophisticated applications.

One area of notable progress is style-transfer research, where AI learns and applies the style of one image to another. This would be like fusing a Roundhand script style into a modern Sans Serif, creating something like Helvetica with swashes and stroke contrast.

Significant strides are also being made in so-called few-shot font generation tasks, which involve AI learning a font’s style from a few initial characters and then extrapolating it to generate a complete font.

This approach has enormous commercial and creative potential, especially for designing multilingual fonts and those with huge character sets like Japanese and Chinese fonts.

While AI’s potential for producing vector graphics and typography is still in the early stages, the current direction shows a promising future, gradually overcoming the complexities and opening new avenues for designers.

Guiding the Future: The Crucial Role Of Designers In AI Typography

Given this trajectory and the lofty claims of what AI may do in the future, creative professionals are rightly contemplating its short-term implications. Designers are increasingly concerned that their specialised skills, including typography, might be overlooked in a landscape filled with AI-aided enthusiasts.

To preserve our creative integrity and professional effectiveness, it’s crucial for designers to influence the development of AI tools and insist on high design standards to positively shape the future of our industry.

Despite initial fears and controversies, Gutenberg’s press became one of history’s most transformative inventions. AI, too, holds a similar potential, but its direction depends on our approach.

The Designer’s Dilemma: Embracing AI While Maintaining Quality

We face a choice: harness artificial intelligence to boost our creativity and efficiency or risk allowing naive automation to erode the quality of our work. Rather than being passive spectators, we must actively steer AI advancements toward quality-driven outcomes, ensuring these tools enhance rather than diminish our design capabilities.

It has been noted that designers can harness AI tools more effectively because they possess a deep understanding of how to construct an idea. But embracing these new tools doesn’t mean relaxing our guard and allowing standards to be set for us. Instead, we should use AI as a springboard for inspiration and innovation.

For example, current AI-generated imagery often yields unexpected results due to a combination of unwieldy text prompts and massive data sets. But it can be an effective tool for inspiration and to spark new ideas.

Holding The Line In AI Typography

In typography, designers will need to be more vigilant when selecting typefaces. A wave of potentially original and inventive amateur fonts may flood the market, and assessing their quality will take more than a surface-level glance. Designers will need to check their character sets, spacing, and overall design more carefully.

Using typefaces skillfully is more important than ever; it will not only make work stand out but also influence industry trends and standards, inspiring and guiding type designers.

Adapting To AI In Type Design

The development and direction of AI tools don’t need to be solely in the hands of large corporations investing billions into the technology. A positive step forward would be for type foundries to collaborate, pooling their resources to create a collective AI software model. This cooperative approach would enable them not only to capitalise on AI-driven innovations but also to safeguard their unique designs from unauthorised use by others.

Furthermore, research indicates that smaller AI models can sometimes outperform their larger counterparts, opening doors for independent foundries to develop custom, small-scale AI tools tailored to their specific needs.

Designers Shaping the Future: From Static Typography To AI-Driven Innovation

While a wave of mixed-quality amateur fonts is a concern, AI is poised to significantly enhance the quality and innovation of professionally crafted typefaces. In partnership with developers, type designers will lead the next evolution of type.

What we’ve become used to in terms of typography is woefully static, lacking the ability to dynamically adjust to content, context, or reader interaction. At present, our options are limited to changing font styles and incorporating emojis.

Historically, scribes were adept at creating text with emphasis and embellishments, enriching the transfer of information. When Johannes Gutenberg invented his printing press, his goal wasn’t to surpass scribes’ artistry but to bring knowledge and information to the masses. Gutenberg succeeded as far as that is concerned, but the press left behind the scribes’ nuanced ability to visually enhance text, even if type has evolved creatively along the way.

Typography’s Destiny

The next revolution in typography ought to usher in an era of fluidity, adaptability, and interactivity in textual presentation. Type ought to act more like custom lettering. This shift would significantly enhance the reader’s experience, making written communication more versatile, precise, and responsive to factors such as:

  • Content sensitivity
    Text might change based on the content it’s displaying. For example, changing style and rhythm for the climax of a book or floating playfully when reading an uplifting poem.
  • Environmental adaptability
    Text changes in response to different lighting or the reader’s distance from the text.
  • Emotional expression
    Incorporating elements that change based on the emotional tone of the text, like color shifts or subtle animations for expressive communication.
  • User interaction
    Text could vary depending on the user’s reading speed, eye movement, or even emotional responses detected through biometric sensors.
  • Device and platform responsiveness
    We could have text adapted for optimal readability, considering factors like screen size, resolution, and orientation without having to “guess” in CSS.
  • Accessibility Enhancements
    Imagine situations where text dynamically adjusts in size and contrast to accommodate young readers, dyslexic readers, or those with visual impairments.
  • Language and cultural adaptation
    For example, a type could effortlessly transition between languages and scripts while maintaining the design’s typographic intention and adapting sensitively to cultural nuances.

Conclusion: Embracing The Future Of Design

We stand at the threshold of a monumental shift in typography. Like every industry, we’re entering a period of significant transformation. Future scenarios like TypeMaster3000 show how turbulent the journey will be for the industry. But it is a journey worth making to push beyond the barriers of static type, advance our creative capabilities, and foster better communication across cultures.

Change is coming, and as designers, it’s not enough to merely accept that change; we must actively steer it, applying our expertise, taste, and judgment. It’s crucial that we collectively guide the integration of AI in typography to do more than automate — we must aim to elevate. Our goal is dynamic, precise, and contextually responsive typography that transcends the static utility of fonts.

By guiding AI with our collective creativity and insights, we can not only augment our creativity but raise design standards and enrich our entire civilization.

]]>
hello@smashingmagazine.com (Filip Paldia & Jamie Clarke)
<![CDATA[The Complex But Awesome CSS border-image Property]]> https://smashingmagazine.com/2024/01/css-border-image-property/ https://smashingmagazine.com/2024/01/css-border-image-property/ Tue, 16 Jan 2024 13:00:00 GMT The border-image property is nothing new. Even deprecated Internet Explorer supports it, so you know we’re treading well-charted territory. At the same time, it’s not exactly one of those properties you likely keep close at hand, and its confusing concepts of “slicing” and “outsets” don’t make it the easiest property to use.

I’m here to tell you that border-image is not only capable of producing some incredibly eye-catching UI designs but is also able to accomplish some other common patterns we often turn to other properties and approaches for.

In this article, my plan is to dispel the confusing aspects of border-image without getting into a bunch of theoretical explanations and technical jargon. Instead, we’re going to have fun with the property, using it to create shapes and put a different spin on things like range sliders, tooltips, and title decorations.

By the end of this, I hope that border-image becomes your new favorite property just as it has become mine!

The Concept of Image Borders

There are a few specific aspects of border-image that I think are crucial for understanding how it works.

It’s Easy To Accidentally Override A Border Image

The CSS Backgrounds and Border Module Level 3 specification says border-image should replace any regular border you define, but it’s not always the case. Try the code below, and all you will see is a red border.

/* All I see is a red border */
.element {
  border-image: linear-gradient(blue, red) 1;
  border: 5px solid red;
}

That’s because we’re technically declaring border after border-image. Order really does matter when working with border-image!

/* 👍 */
.element {
  border: 5px solid red;
  border-image: linear-gradient(blue, red) 1;
}

You can already see how this could be confusing for anyone jumping into border-image, especially for the first time. You will also notice that our gradient border has a thickness equal to 5px, which is the border-width we defined.

I make it a personal habit not to use border and border-image together because it helps me avoid overriding the border image I’m trying to create and be able to control the border decoration using only one property (even if both can be used together). So, if you get a strange result, don’t forget to check if you have a border declared somewhere.

It Is Painted Over Backgrounds And Box Shadows

The second tip I want to offer is that border-image is painted above the element’s background and box-shadow but below the element’s content. This detail is important for some of the tricks we will use in later examples. The following Pen demonstrates how a border image is applied in that order:

If we were to translate the figure above into code using the provided variables as values, it would look like this:

border-image:
  linear-gradient(...)
  s-top s-right s-bottom s-left / 
  w-top w-right w-bottom w-left /
  o-top o-right o-bottom o-left;

By default, border-image considers the boundaries of the element (illustrated with the blue dotted border in Figure 1) as its area to paint the gradient, but we can change this using the <outset> to increase that area and create an overflow. This is super useful to have “outside” decorations.

Then, the <width> is used to split the area into nine regions, and the <slice> is used to split the source (i.e., the gradient) into nine slices as well. From there, we assign each slice to its corresponding region. Each slice is stretched to fill its assigned region, and if they don’t share equal dimensions, the result is typically a distorted image slice. Later on, we will learn how to control that and prevent distortion.
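
To put numbers to that, here is a generic declaration (not tied to any of the demos in this article) that exercises all three parts:

.box {
  /* slice 40% off each edge of the gradient,
     make each border region 20px thick,
     and expand the painting area 10px beyond the element */
  border-image: linear-gradient(blue, red) 40% / 20px / 10px;
}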

The middle region is kept empty by default. That said, it is totally possible to use the fill keyword to do what it says and fill the middle region with slice nine (which is always the center slice).

border-image: linear-gradient(...) fill
  s-top s-right s-bottom s-left / 
  w-top w-right w-bottom w-left /
  o-top o-right o-bottom o-left;

I know this was a pretty fast primer on border-image, but I think it’s all we need to do some pretty awesome stuff. Let’s jump into the fun and start experimenting with effects.

Gradient Overlay

Our first experiment is to add a gradient overlay above an existing background. This is a fairly common pattern to improve the legibility of text by increasing the contrast between the text color and the background color.

There are several well-known approaches to setting an overlay between text and content. Here’s one from Chris Coyier back in 2013. And that isn’t even the most widely-used approach, which is likely using pseudo-elements.
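
For comparison, a typical pseudo-element version looks something like this (a generic sketch; the class name and colors are placeholders):

.overlay {
  position: relative;
}
.overlay::before {
  content: "";
  position: absolute;
  inset: 0;
  background: linear-gradient(#0003, #000);
}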

But border-image gives us a one-line way to pull it off:

.overlay {
  border-image: fill 0 linear-gradient(#0003,#000); 
}

That’s all! No extra element, no pseudo-element, and no need to modify the background property.

Another popular pattern is the full-width background: a background that bleeds out of its container to cover the full width of the screen. Well, guess what? The border-image property can pull it off with one line of code:

.full-background {
  border-image: conic-gradient(pink 0 0) fill 0//0 100vw;
}

If you compare what we just did with the gradient overlay example, the <outset> is the only difference between the implementations. Other than that, we have a single slice placed in the middle region that covers the entire area we extended to the edge of the screen.

We are not limited to a solid color, of course, since we are working with gradients.
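
For example, swapping the conic-gradient() for a linear one turns the full-width strip into a fade (the colors here are just an illustration):

.full-background {
  border-image: linear-gradient(60deg, #1095c1, lightblue) fill 0//0 100vw;
}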

Fancy Headings

Another thing we can use border-image for is decorating headings with fancy borders. Let’s start with the exact same implementation we used for the full-width backgrounds. Only this time, we’re replacing the conic-gradient() with a linear-gradient():

.full-background {
  border-image: linear-gradient(0deg, #1095c1 5px, lightblue 0) fill 0//0 100vw;
}

Now we apply this to an <h1> element, and the heading gets a full-width background with a thin line running along its bottom edge.

So, that’s two different ways to get the same effect using the same border-image syntax. We can actually get this a third way as well:

.full-line {
  border-image: conic-gradient(#1095c1 0 0) 0 0 1 0/0 0 8px 0/0 100vw 0 0;
}

This time, I have defined a bottom slice equal to 1 (unitless values are computed as pixels), which produces two slices, the seventh (bottom center) and the ninth (center). From there, I have set the seventh region to a height of 8px. Note that I am not using the fill keyword this time, so the middle region is not filled like it was last time. Instead, we only fill the seventh region, which spans the full width of the border-image’s area at a height of 8px.

You’re wondering why I am defining a slice equal to 1, right? The goal is to have only two slices: the seventh (bottom center) and the ninth (middle), and since we are applying a solid color, the size doesn't matter. That’s why I used 1; a small positive value. Any value will work (e.g., 0.5, 2, 100, 50%, 76%, and so on); it’s just that 1 is shorter to type. Remember that the slice will get stretched within its region, so 1px is enough to fill the whole region.

Here’s the deal: The slice value doesn’t really matter when working with a solid color. In most cases, the value winds up being 0 (empty) or 1 (filled). You can think of it as binary logic.

We could do this a fourth way!

.full-line {
  border-image: conic-gradient(#1095c1 0 0) 0 1 0 0/calc(100% - 8px) 100% 0 0/0 100vw 0 0;
}

I’ll let you do the work to figure out how the above CSS works. It’s a good opportunity to get a feel for slicing elements. Take a pen and paper and try to identify which slices we are using and which regions will be filled.

One thing that makes border-image a complex property is all the different ways to achieve the same result. You can wind up with a lot of different combinations, and when all of them produce the same result, it’s tough to form a mental model for understanding how all of the values work together.

Even though there is no single “right” way to do these heading borders, I prefer the second syntax because it allows me to simply change one color value to establish a “real” gradient instead of a solid color.

.full-line {
  border-image: repeating-linear-gradient(...) fill 0 /
    calc(100% - var(--b)) 0 0/0 100vw 0 0 repeat;
}

Let’s try a different decoration: horizontal lines on both sides of a heading:

h2 {
  --s: 3px;   /* the thickness */
  --w: 100px; /* the width */
  --g: 10px;  /* the gap */
  border-image: 
     conic-gradient(red 0 0) 
     0 50%/calc(50% - var(--s)/2) var(--w)/0 calc(var(--w) + var(--g));
}

The top and bottom values of the <slice> are equal to 0, and the left and right ones are equal to 50%. This means that slices six and eight share the gradient. All the other slices — including the center — are empty.

As far as the regions go, the top and bottom regions (consisting of regions 1, 5, and 2 at the top and regions 4, 7, and 3 at the bottom) have a height equal to 50% - var(--s)/2 leaving the --s variable as a height for the remaining regions (6, 8, and 9). The right and the left regions have a width equal to the --w variable. Since slices 6 and 8 are the only ones that are filled, the only regions we need to care about are 6 and 8. Both have a height equal to the border’s thickness, --s, and a width equal to --w.

I think you know how the rest of the story goes.

Notice I am using 50% as a slice. It demonstrates how any value does the job, as we discussed in the last section when I explained why I chose a value of 1. It also prepares us for the next effect, where I will be using a real gradient:

See the Pen Horizontal lines around your title with gradient coloration by Temani Afif.

When it comes to real gradients, the value of the slice is important, and sometimes you need very precise values. To be honest, this can be very tricky, and I even get lost trying to figure out the right value.

Let’s end this section with more examples of title decorations. When combined with other properties, border-image can make really nice effects.

See the Pen Fancy title divider with one element by Temani Afif.

More Examples

Now that we’ve seen several detailed examples of how border-image works, I’m going to drop in several more. Rather than explaining them in great detail, inspect the CSS and try to explain them in your own words, and use these as inspiration for your own work.

Infinite Image Decorations

When it comes to images, border-image can be a lifesaver since we don’t have access to pseudo-elements. Here are some cool infinite decorations where we can have a touch of 3D effect.

See the Pen Infinite image shadow by Temani Afif.

See the Pen Infinite image shadow II by Temani Afif.

See the Pen Infinite image stripes shadow by Temani Afif.

See the Pen 3D trailing shadow for images by Temani Afif.

If you check the code in these examples, you will find they share nearly the same structure. If you have trouble recognizing the pattern, please don’t hesitate to leave a comment at the end of this article, and I would be happy to point it out.

Custom Range Slider

I wrote a detailed article on how to create the following example, and you can refer to it for range slider variations using the same technique.

See the Pen CSS only custom range sliders by Temani Afif.

I used border-image and styled only the “thumb” element. Range inputs are known to have different implementations across browsers, but the “thumb” is common to all of them.
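
To give a rough idea of the technique (this is a simplified sketch, not the exact code from that article), the thumb’s border-image bleeds a two-color gradient far out to both sides via the outset, painting the track:

input[type="range"] {
  -webkit-appearance: none;
  appearance: none;
  background: none;
  overflow: hidden; /* clips the painted track at the input’s edges */
}
input[type="range"]::-webkit-slider-thumb {
  -webkit-appearance: none;
  width: 20px;
  height: 20px;
  border-radius: 50%;
  background: #1095c1;
  /* left half of the track in the accent color, right half in gray */
  border-image: linear-gradient(90deg, #1095c1 50%, #ababab 0)
    0 1 / calc(50% - 2px) 100vw / 0 100vw;
}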

Ribbon Shapes

In case you missed it, I have created a collection of more than 100 single-element ribbon shapes, and some of them rely on border-image. I call them the “infinite ribbons.”

See the Pen Full screen Ribbon title by Temani Afif.

See the Pen Infinite Ribbon Shapes by Temani Afif.

Heart Shapes

I have written about CSS heart shapes using different approaches, and one of them uses a border-image technique.

.heart {
  width: 200px;
  aspect-ratio: 1;
  border-image: radial-gradient(red 69%,#0000 70%) 84.5%/50%;
  clip-path: polygon(-42% 0,50% 91%, 142% 0);
}

See the Pen Heart shape using border-image by Temani Afif.

The interesting part here is the slice that is equal to 84.5%. That is a bigger value than 50%, so it may seem incorrect since the total exceeds 100%. But it’s perfectly fine because slices are able to overlap one another!

When using values bigger than 50%, the corner slices (1, 2, 3, and 4) will share common parts, but the other slices are considered empty. Logically, when using a slice equal to 100%, we will end with four slices containing the full source.

Here is an example to illustrate the trick:

See the Pen Overview of the slice effect by Temani Afif.

The slider will update the slice from 0% to 100%. On the left, you can see how the corner slices (1-4) grow. Between 0% and 50%, the result is logical and intuitive. Beyond 50%, the slices start to overlap. When reaching 100%, you can see the full circle repeated four times because each slice contains the full gradient, thanks to the overlap.

It can be confusing and not easy to visualize, but overlaps can be really useful to create custom shapes and fancy decorations.

Tooltips

What about a simple tooltip shape with only two properties? Yes, it’s possible!

See the Pen A simple Tooltip using 2 CSS properties by Temani Afif.

.tooltip {
  /* triangle dimension */
  --b: 2em; /* base */
  --h: 1em; /* height*/

  border-image: conic-gradient(#CC333F 0 0) fill 0//var(--h);
  clip-path: 
    polygon(0 100%,0 0,100% 0,100% 100%,
      calc(50% + var(--b)/2) 100%,
      50% calc(100% + var(--h)),
      calc(50% - var(--b)/2) 100%);
}
Filling Border Radius

Unlike most decorative border properties (e.g., box-shadow, outline, border, and so on), border-image doesn’t respect border-radius. The element is still a box, even if we’ve rounded off the corners. Other properties will recognize the visual boundary established by border-radius, but border-image bleeds right through it.

That could be a drawback in some instances, I suppose, but it’s also one of the quirky things about CSS that can be leveraged for other uses like creating images with inner radius:

See the Pen Inner radius to image element by Temani Afif.

Cool, right? Only one line of code makes it happen:

img {
  --c: #A7DBD8;
  --s: 10px; /* the border thickness*/

  border-image: conic-gradient(var(--c) 0 0) fill 0 // var(--s);
}

We can even leave the center empty to get a variation that simply borders the entire element:

See the Pen Rounded images inside squares by Temani Afif.
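
The idea is the same one we just saw, minus the fill keyword, with the frame pushed outside the rounded element by the outset (a sketch; the exact demo code may differ, and the values are illustrative):

img {
  --c: #A7DBD8;
  --s: 10px; /* the border thickness */

  border-radius: 50%;
  border-image: conic-gradient(var(--c) 0 0) 1 / var(--s) / var(--s);
}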

Conclusion

Did you know the border-image property was such a powerful — and flexible — CSS feature? Despite the challenge it takes to understand the syntax, there are ways to keep the code clean and simple. Plus, there is often more than one “right” way to get the same result. It’s a complicated but robust CSS feature.

If the concepts of slicing and defining regions with border-image are still giving you problems, don’t worry. That’s super common. It took me a lot of time to fully understand how border-image works and how to use it with different approaches to the syntax. Give yourself plenty of time, too. It helps to re-read things like this article more than once to let the concepts sink in.

Complexities aside, I hope that you will add border-image to your toolbox and create a lot of magic with it. We can do even more with border-image than what was demonstrated here. I actually experiment with this sort of stuff frequently and share my work over at my CSS Tip website. Consider subscribing (RSS) to keep up with the fun and weird things I try.

Special thanks to @SelenIT2, who pushed me to explore this property and wrote an excellent article on it.

]]>
hello@smashingmagazine.com (Temani Afif)
<![CDATA[Top Front-End Tools Of 2023]]> https://smashingmagazine.com/2024/01/top-frontend-tools-2023/ https://smashingmagazine.com/2024/01/top-frontend-tools-2023/ Thu, 11 Jan 2024 18:00:00 GMT Over the past 12 months, I’ve shared hundreds of tools in my newsletter, Web Tools Weekly. I feature tons of practical libraries, helpers, and other useful things for front-end and full-stack developers. These tools span numerous categories, including JavaScript libraries and utilities, web frameworks, CSS generators, database tools, React components, CLI tools, and even ChatGPT and AI-based tools, the latter of which I’ve started covering regularly over the past year.

The 60 tools in this article were some of the most clicked web developer tools in my newsletter in 2023. As you’ll see, most of these are quite practical for front-end and full-stack development, so you’ll likely find lots that you’ll want to bookmark or use in an upcoming project. The list is roughly in reverse order in terms of popularity, so be sure to scroll down to see what the most popular tools of the year were!

Kuma UI

Kuma UI, which describes itself as “the future of CSS-in-JS”, is a headless, utility-first, zero-runtime component library that includes its own CSS-in-JS solution.

What makes Kuma UI different is its hybrid approach that allows for dynamic changes to your styles at runtime while still keeping the performance benefits of zero-runtime CSS-in-JS.

Boxslider

Although the use of carousel components has been discouraged in recent years, they still get asked for by my clients, and developers are always on the lookout for them. Boxslider is one such component.

This carousel, or content slider, includes seven slide transition effects that you can try out on the demo page, including a 3D cube effect, tile flip, and a simple fade.

Effect

Effect is described as “a powerful TypeScript library designed to help developers easily create complex, synchronous, and asynchronous programs.”

The idea behind Effect is to help developers build robust and scalable applications by means of something called structured concurrency, a programming paradigm that allows multiple complex operations to run simultaneously.

HatTip

If you use Express.js for building Node.js apps, you’ll want to check out HatTip. It offers a solution similar to Express.js, but with a more universal approach.

HatTip is a set of JavaScript packages for building HTTP server apps and allows you to write server code that can be deployed anywhere – AWS, Cloudflare Workers, Vercel, and more.

LiveViewJS

LiveViewJS is a simple yet powerful framework for building “LiveViews” in Node.js and Deno. LiveViews were popularized in Elixir’s Phoenix framework and involve moving state management and event handling to the server, with HTML updates delivered via WebSockets.

This technique allows you to build single-page app experiences with features like fast first paint, real-time and multi-player functionality, no need for a client-side routing solution, and lots more.

Scrollbar.app

Scrollbar.app is a one-stop reference and code generation tool for customizing browser scrollbars. You can live test and adjust the scrollbars directly on the page, then copy the CSS.

The scrollbar code involves using vendor-specific pseudo-elements but also incorporates the future-friendly scrollbar-color.
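
The generated code looks roughly like this (the selector and colors below are placeholders):

.scrollable {
  /* future-friendly standard properties */
  scrollbar-color: #1095c1 #e0e0e0; /* thumb, then track */
  scrollbar-width: thin;
}
/* vendor-specific pseudo-elements for Chromium and WebKit */
.scrollable::-webkit-scrollbar {
  width: 10px;
}
.scrollable::-webkit-scrollbar-thumb {
  background: #1095c1;
  border-radius: 5px;
}
.scrollable::-webkit-scrollbar-track {
  background: #e0e0e0;
}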

OpenGPT

OpenGPT is one of many ChatGPT-based tools that have been making the rounds over the past year or so. This one is an open-source AI platform that allows anyone to use and create ChatGPT-based applications.

The main platform for the service itself allows you to search a categorized directory of more than 11,000 ChatGPT apps.

Free Icons

Icon sets always seem to make these end-of-year lists. Free Icons is a generically named set of 22,000+ icons that includes both brand icons and general-use ones.

All are in SVG format, and you can filter by keyword on the home page or grab the whole lot via the GitHub repository.

Materialize

Materialize is an open-source framework of UI components based on Google’s Material Design guidelines.

The project, which includes 20+ categories of components, is a fork of an older project that’s no longer maintained.

qr-code

qr-code is an SVG-based web component that generates an animatable and customizable QR code. There’s an interactive demo page where you can try out the different animation effects.

The resulting QR code is SVG-based, the component has no dependencies, and it is easy to customize.

GradientGenerator

GradientGenerator is an interactive CSS gradient builder that allows you to build advanced layered gradients. You can customize your layered gradient using a whole slew of different settings and features.

The app also allows you to save gradients to your library and even import community-built gradients.

iDraw.js

iDraw.js is a simple JavaScript framework for creating apps that allow Canvas-based drawing.

There are some nice examples in a live playground where you can see the simplicity and ease of use of the API.

VanJS

VanJS is a UI library similar to React but doesn’t use JSX, virtual DOM, transpiling, and so on. The idea is to avoid the overhead of configuration that’s normally associated with using a library like React.

The library claims to be the smallest UI library in the world at under 1kb. It has first-class support for TypeScript and naturally boasts strong performance compared to React, Vue, and so on.
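
As a taste of the approach, a minimal counter with VanJS’s tag functions might look like this (a sketch based on the library’s documented API):

import van from "vanjs-core";

const { button, div, p } = van.tags;

const Counter = () => {
  const count = van.state(0); // reactive state, no virtual DOM
  return div(
    p("Count: ", count),
    button({ onclick: () => ++count.val }, "+1"),
  );
};

van.add(document.body, Counter());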

Mamba UI

Mamba UI is the first of multiple Tailwind-based tools that made this year’s list. This is a UI library of 150+ components and templates based on the popular utility-first CSS framework.

The library includes pre-styled components in 40+ categories, and you can quickly grab the code for any component in HTML, Vue, or JSX format.

Termino.js

Termino.js is a dependency-free JavaScript component that lets you add embedded terminal-based animations, games, and apps to web pages.

It’s customizable and makes it easy to build terminal animations like keyboard typing effects. The demo page includes a few simple examples, including an embedded terminal app where the user can get info on any GitHub profile.

SVG Chart Generator

SVG Chart Generator is a beautifully designed chart generator that lets you generate SVG-based charts in line or bar format.

The generator allows you to interactively customize the chart with width/height settings, number of points, smoothness, and more. You can also import your own data points.

PeepsLab

PeepsLab is a simple online tool to customize your own unique illustrated user avatars. You can build your own avatars by cycling through the different options for skin color, hair color, facial hair, accessories, styles for head, face, etc.

Or you can simply hit the “Randomize” button to generate a random avatar before downloading it in PNG format.

Ribbon Shapes

Ribbon Shapes is an online gallery of pure CSS ribbons in just about any ribbon format you can imagine.

The gallery includes more than 100 ribbons, each created with a single HTML element and easy to customize using CSS variables.

big-AGI

big-AGI is a personal GPT-powered app that’s described as “the GPT application for professionals that need function, form, simplicity, and speed.”

It has a responsive, mobile-friendly interface and includes features like AI personas, text-to-image, voice, response streaming, code highlighting and execution, PDF import, and lots more.

Easy Email

Easy Email is a drag-and-drop email editor based on MJML, the popular HTML email authoring framework.

This solution allows you to transform structured JSON data into HTML that’s compatible with major email clients. It includes features for easily customizing blocks and components and for configuring themes.

CSS Components

CSS Components throws its hat into the CSS-in-JS space with this fresh solution, described as “not another styling system.”

This solution is a response to challenges inherent in using CSS-in-JS tools with React Server Components, and the library is inspired by another such tool, Stitches, and promises an improved developer experience.

Toaster

Toaster is an experimental pure CSS 3D editor that allows you to build models using pure HTML with CSS transforms.

The author acknowledges that the tool isn’t too practical and can currently only export/import in JSON format (no CSS export). With improved performance, this could be a useful tool.

Fontpair

Fontpair isn’t a new resource, but it makes this year’s list. It’s a font directory specifically for finding fonts that match well together in your designs.

All the fonts are sourced from Google Fonts, and the pairings are manually curated by the authors.

Breadit

Breadit is a modern, full-stack Reddit clone built with Next.js App Router, TypeScript, and Tailwind.

This is a nice app to learn and experiment with, featuring infinite scroll, NextAuth, image uploads, a feature-rich post editor, nested commenting, and lots more.

Keep React

Keep React is a Tailwind and React-based component library that includes 40+ components and interactive elements.

The components are pre-designed, but all the components are easy to customize using Tailwind classes and are suitable for just about any project.

TW Elements

TW Elements is a massive library of more than 500 Bootstrap components recreated using Tailwind CSS. This is a great option for those already familiar with Bootstrap and looking for a modern alternative.

The library boasts better overall design and functionality compared to the original components in the Bootstrap framework, and you can easily search for components by keyword from the home page.

Autocomplete

Autocomplete is an open-source, production-ready JavaScript library for building customizable autocomplete experiences for form inputs and search fields.

You can easily build an autocomplete experience by defining a container, data to populate it, and any virtual DOM solution (JS, React, Vue, Preact, and so on).

CSS Loaders

CSS Loaders is a huge collection of more than 600 CSS loading animations organized under more than 30 categories.

This gallery includes just about any style of loader you can think of, and you can easily copy/paste the HTML/CSS for any loader with just a click.

Flectofy

Flectofy is an interactive tool that provides an interface that allows you to build unique SVG shapes.

The styles of shapes here are pretty niche, so they wouldn’t be useful in too many contexts, but the way the interface works and the look of the shapes are certainly different.

Picyard

Picyard is an app that generates screenshots with attractive backgrounds for use in mockups, social media posts, and so on.

The image/background tool is free, but the app also includes premium features for generating attractive code snippets, charts, mindmaps, timelines, and lots more.

UI Content

UI Content is described as “the best place to find professional placeholder text.” It includes placeholder text in seven different categories, along with dummy SVG logos.

The idea here is to avoid typical “lorem ipsum” and use actual content instead to ensure your designs look closer to what the final product will be.

Vessel.js

Vessel.js was one of the more unique projects I discovered over the past year. It’s a JavaScript library based on Three.js, the WebGL library, for conceptual ship design (in other words, building boats).

You can check out a number of examples in a gallery, and there’s also a tutorial that gets you up to speed on best practices for using the library — assuming this happens to be your niche!

Modern Font Stacks

Modern Font Stacks is a resource to help you identify the best-performing font stacks. That is, the stacks are based on pre-installed default OS fonts.

You can choose from specific typographic categories like Traditional, Old Style, Neo-Grotesque, Monospace Code, Handwritten, and more. Again, these are generally fonts that are already available on Windows, Mac, Linux, iOS, and Android, giving you the best possible support without extra resource requests.
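
Using one of these stacks is a single declaration; a system UI stack, for example, goes along these lines (the exact stacks on the site may differ slightly):

body {
  font-family: system-ui, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
}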

FancySymbol

FancySymbol is a huge repository of ready-to-copy/paste special characters, text symbols, foreign language symbols, and more.

Includes more than 50 categories of symbols and also allows you to create unique and fancy copy/paste-able text like upside-down text or text written in “invisible ink,” among others.

Observable Plot

Observable Plot is a JavaScript library for creating exploratory data visualizations (i.e., “plots”) using SVG-based charts.

The interface for the plots can include specific features like scales, projections, legends, curves, markers, and more. You’ll have to check out the documentation for the lowdown on these different features, which are illustrated using lots of interactive examples.

Washington Post Design System

The Washington Post Design System is a UI kit specifically built for properties associated with the Washington Post, a popular American daily newspaper and news outlet.

Although it is designed for WaPo’s engineers, it’s MIT-licensed and built in React using Stitches and Radix UI. So, the customizable components and other assets may be of use if you’re using a similar tech stack.

FormSpamPrevention

FormSpamPrevention isn’t a popular project, but it got quite a bit of traction when I shared it this past year. It offers a simple vanilla JavaScript and HTML solution for preventing form spam.

The script is based on using custom HTML tags for form content that gets converted to valid HTML tags.

Chatbox

Chatbox is a native app for Windows, Mac, and Linux that gives you access to an AI copilot on your desktop.

This particular tool isn’t strictly focused on web development, but it taps into various LLM models and can be used as an overall productivity app for all sorts of daily tech-related tasks.

CSS Generators

CSS Generators is not a single tool but a small collection of CSS generators, a popular kind of tool among front-end developers.

I like this set of generators because it has a few kinds you don’t see elsewhere: Two glow generators (for text and elements) and an underline generator.

Leporello.js

Leporello.js is an interactive functional programming IDE for JavaScript. This means your code is executed instantly as you type, potentially improving debugging processes.

Most of us are likely set on using a particular IDE, but if you’re into experimenting with new ones, this might be a good one to check out.

Calligrapher.ai

Calligrapher.ai is an online tool for AI-generated handwriting samples that you can download as SVG.

There is no need to “write” anything; just type some text and customize stroke width and legibility, and the AI will do the rest. You can choose from 9 different print and cursive styles before generating the sample.

Clone UI

Clone UI is an AI-based tool that lets you generate UI components with a simple text prompt.

The app includes five free daily credits and includes a showcase of existing UI components generated by users.

Float UI

Float UI is a set of 100+ responsive and accessible UI components with RTL support. Also includes five templates.

The components and templates are built with Tailwind and are easy to customize. You can use them with React, Vue, and Svelte, or you can simply use HTML with Tailwind classes.

Calendar.js

Calendar.js is one of numerous date picker and calendar libraries available. This solution is lightweight and has no dependencies.

It’s fully configurable and includes drag-and-drop for events, exporting features, import from iCal and JSON, and lots more.

PCUI

PCUI is yet another React-based component library that makes a list. This one provides a set of pre-styled components.

There’s a storybook that demonstrates all the basic components, and you can also view a few UI examples that show a few advanced examples in action (a to-do list and an example that keeps a “history” of the UI’s state).

Accessible Color Palette Generator

Accessible Color Palette Generator is a great way to ensure any of your designs start with an accessible set of color choices.

You can generate a random accessible palette or enter any color, and the tool will generate an accessible palette for you based on the color you selected.

Picography

Picography is an alternative to the popular Unsplash and similarly offers high-resolution, royalty-free stock photos.

The photos are categorized and searchable and available for free use in commercial projects.

Mailo

Mailo is a component-based, interactive HTML email layout designer that helps you easily build cross-client and responsive HTML emails.

Mailo includes pre-built components and team features, and the components are designed to work with just about any email client.

Pines

Pines is an aptly named UI component library that’s built with Tailwind and Alpine, the popular JavaScript framework that’s similar to a modern version of jQuery.

Pines includes dozens of components, including animations, sliders, tooltips, accordions, modals, and more.

Park UI

Park UI is a set of beautifully designed components built on top of Ark UI, which itself is a set of accessible and customizable components.

Park UI can help you build your own design system, and the home page includes a neat interactive widget that demonstrates how easy it is to style the components. You can use Park UI with React, Vue, Solid, Panda CSS, and Tailwind.

Iconhunt

Iconhunt is an icon search engine that gives you access to 170,000+ free, open-source icons.

The icons can be downloaded in various formats, including Notion, Figma, SVG, or PNG, and you can customize the color of any icon you choose before downloading.

Sailboat UI

Sailboat UI is a Tailwind-based UI component library that includes 150+ open-source components.

The components are very Bootstrap-esque, and you can search for and see live previews of the components in the docs.

Shaper

Shaper is a generative design tool for UI Interfaces that allows you to visually fiddle with a number of different interface features to customize your own UI.

It includes settings for custom typography, spacing, vertical rhythm, and so on, after which you can copy and paste the design tokens as CSS variables.

Maily

Maily is an open-source editor that makes it easy to create beautiful HTML emails using a set of pre-built components.

It currently includes components in categories covering buttons, variables, text formatting, images, logos, alignment, dividers, spacers, footers, lists, and quotes, with more on the way.

Realtime Colors

Realtime Colors offers an interactive website that lets you test color palettes and typography on real live UI elements in real-time.

You can use the tool to generate palettes and deep links to a specific palette for sharing with others or demoing interfaces in dark or light modes.

Strawberry

Strawberry is described as a “tiny” front-end framework that offers reactivity and composability with zero dependencies, no build step, and less than 3KB gzipped.

The idea here is not to offer a React or Vue alternative but something you’d use for simpler apps and other low-maintenance projects.

Swap.js

Swap.js is a JavaScript micro-library that uses HTML attributes to facilitate Ajax-style navigation in web pages in less than 100 lines of code.

This is in the same vein as libraries like HTMX and Hotwire, allowing you to replace content on the page by sending requests from the server as HTML fragments.

restorePhotos.io

restorePhotos.io is an open-source tool that uses AI to attempt to restore or correct old, blurry, or damaged photos.

You can deploy your own version locally or use their online tool to restore up to 5 photos per day for free.

Better Select

Better Select is a web component that provides a minimal custom select element, something web developers have been grappling with accomplishing for decades!

This solution offers a fallback option and includes a small set of options via attributes that customize the functionality and look.

Space.js

Interestingly, Space.js ended up being the most-clicked tool in my newsletter the past year.

It’s one of two sibling libraries that are based on Three.js. The main one is for creating “future” UIs and panel components, and the other (called Alien.js) is for 3D utilities, materials, shaders, and physics.

What Was Your Favourite Tool of 2023?

That wraps up this year’s roundup of the hottest front-end tools. I’m sure you’ll find at least a few of these to be of use in a new project in the coming months.

As always, I’m on the lookout for the latest tools for front-end developers, so feel free to post your favourites from the past year in the comments, and you can subscribe to Web Tools Weekly if you want to keep up with new stuff regularly!

]]>
hello@smashingmagazine.com (Louis Lazaris)
<![CDATA[SolidStart: A Different Breed Of Meta-Framework]]> https://smashingmagazine.com/2024/01/solidstart-different-breed-meta-framework/ https://smashingmagazine.com/2024/01/solidstart-different-breed-meta-framework/ Mon, 08 Jan 2024 10:00:00 GMT The current landscape of web tooling is increasingly more complex than ever before. We have libraries such as Solid, Vue, Svelte, Angular, React, and others that handle UI (User Interface) updates in an ergonomic fashion. The ever more important topic weighing on developers is the balance and trade-off of performance and usability best practices.

Developers are also blurring the lines between front-end and back-end code. The way we colocate logic and data is becoming more interesting as we integrate and mesh the way they work together to deliver a unified app experience.

With these shifts in ideology in mind, meta-frameworks have evolved around the core libraries in unique ways. To encapsulate the paradigms in which the UI is rendered and create seamless interoperability between our server code and our browser code, new practices are emerging.

While the initial idea of having a “meta” framework was to stitch together different sets of tools in order to build smooth experiences, it is tough to create integrations without making some level of opinionated decisions. So frameworks such as QwikCity, SvelteKit, Redwood, and Next.js went all the way into their own opinionated territory and provided a hard railway to ensure a defined set of conventions.

Meanwhile, others like Nuxt, Remix, and Analog stayed with a more shallow abstraction of their integrations, allowing a mix of their toolings and more easily using resources from the community (Vite is a good example of a tool that is shallowly used by all of them).

This not only produces lower vendor lock-in for developers but also allows configuration to be re-used in some cases, as such decisions are stripped of opinions in favor of stronger abstractions. SolidStart takes a giant step beyond that into unbiased territory. Its own core is around 1,500 lines of code, and the biggest pieces of functionality are provided by a meshing of well-integrated tools.

Modules Over Monoliths

The impetus behind decoupling the architecture completely is to give power to the consuming developer to pick the framework and build it according to their desire. A developer may want to use Solid and SSR, but let’s imagine legacy code has a tight dependency on TanStack Router, while SolidStart and most Solid projects use Solid-Router instead. With a decoupled architecture, it becomes possible to create an incremental adoption path or an integration layer so that everything works tailored to the team’s best benefit.

The decoupled architecture sustaining newer frameworks also empowers the developer for a better debugging experience within and beyond its community. If an issue is encountered on the server, you’re not restricted to the knowledge of a specific framework.

For example, since both are based on Nitro, the Analog and SolidStart communities can now share knowledge with each other. Beyond that, because all of them are built on Nitro and Vite, Nuxt, Analog, and SolidStart can deploy to the same platforms and share implementation details to help each ecosystem grow together. The community wins with this approach, and the library/framework developers win as well. This is a radically new pattern and approach to jointly sharing the weight of meta-framework maintenance (one of the most feared responsibilities of maintainers).

What Is SolidStart Exactly?

SolidStart is built from five fundamental pillars:

  1. Solid: the view library that provides rendering abstractions.
  2. Vite (within Vinxi): the bundler to optimize code for execution in different runtimes.
  3. Nitro (within Vinxi): the agnostic web server created by the Nuxt team and based on h3 with Rollup.
  4. Vinxi: the orchestrator, which defines the routers, their runtimes, and which code belongs to each one.
  5. Seroval: the data serializer that will bridge data from server to browser and back.

1. Solid

Solid as a rendering library has become increasingly popular because of its incredible rendering performance and thin abstraction layer. It’s built on top of Signals, a renewing and modern implementation of the classical Observer Pattern, and provides a series of helpers to empower the developer to create extremely high-performance and easy-to-read code.

It uses JSX and has syntax that is very similar to React, but under the hood, it operates in a completely different manner, bringing the developer closer to the DOM while still providing the ergonomics needed to stay productive. At only 3Kb of bundle size, it’s often a choice even for mostly static sites. For example, many people use Solid to bring interactivity to their content-based Astro websites when needed.

Solid also brings first-class primitives, built-in Control Flow components, high-quality state management, and full TypeScript support. Solid packs a punch in a small efficient package.
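
To give a feel for those primitives, here is a minimal Signals-based counter (a generic Solid sketch; the root element id is a placeholder):

import { render } from "solid-js/web";
import { createSignal } from "solid-js";

function Counter() {
  const [count, setCount] = createSignal(0);
  // Only the text node reading count() updates; the component body runs once.
  return <button onClick={() => setCount(count() + 1)}>Count: {count()}</button>;
}

render(() => <Counter />, document.getElementById("app"));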

2. Vite

Arguably the best bundler in the JavaScript ecosystem, Vite has the right balance between declarative and customizable configuration. It’s fully based on TypeScript, which makes it easy to extend by the consuming library or framework, and has a large enough user base that guarantees its versatility. Vite works with and has become the de-facto tool for many frameworks, such as Astro, Vue, Preact, Elm, Lit, Svelte, Nuxt, Analog, Remix, and many others.

Aside from its popularity, it is particularly loved for its fast server start time, HMR support, optimized builds, ease of configuration, rich plug-in ecosystem, modern tooling, and high-quality overall developer experience.

3. Nitro

A framework in itself, Nitro is written in TypeScript and is completely agnostic and open for every meta-framework to use as a foundation. It provides a powerful set of tools and APIs to manage caching, routes, and tree-shaking. It is a fast base for any JavaScript-based project to build their server on. Nitro is highly scalable, integrating easily into DevOps and CI/CD pipelines, security-focused, robust, and boasts a rich set of adapters, making it deployable on most, if not all, major vendor platforms.

Think of Nitro as a bolt-on extension that makes Vite easier to build on and more pliable. It solves a majority of run-time level concerns that would need to be solved in Vite.

4. Vinxi

Vinxi is an SDK (Software Development Kit) that brings a powerful set of configuration-based tools to create full-stack applications. It composes Nitro under the hood to establish a web server and leverages Vite for bundling. It is inspired by the Bun App API and works via a very declarative interface to instantiate an app by setting routers for each runtime.

For example:

import { createApp } from "vinxi";
import solid from "vite-plugin-solid";

const resources = {
    name: "public",
    mode: "static",
    dir: "./public",
};

const spa = {
    name: "client",
    mode: "build",
    handler: "./app/client.tsx",
    target: "browser",
    plugins: () => [solid({ ssr: true })],
    base: "/_build"
}

const server = {
    name: "ssr",
    mode: "handler",
    handler: "./app/server.tsx",
    target: "server",
    plugins: () => [solid({ ssr: true })],
}

export default createApp({
    routers: [resources, spa, server],
});

Since a resource route works as a static bucket, defining mode: "static" means there’s no need for a handler. A router can also be statically built (mode: "build") and targeted at the browser runtime, or it can live on the server and handle each request via its entry-point handler: "./app/server.tsx".

Vinxi will leverage the right set of APIs from Nitro and Vite so your resources aren’t exposed to the wrong runtimes and so that deployment works smoothly for defined platform providers.

5. Seroval

Once routers are set, and the app can handle navigation (hard navigation via the “ssr” handler and soft navigation via the “client” handler), it’s time to stitch them together. This is the part where SolidStart’s core comes into place. It supplies APIs that deliver the ergonomics to fetch and mutate requests.

All these APIs are powered by yet another agnostic library called Seroval. In order to send data between server and client in a secure manner, it all needs to be serialized. Seroval defines itself in an over-simplistic way: “Stringify JS Values.” However, this definition doesn’t address the fact it does so in an extremely powerful and fast fashion.

Thanks to Seroval, SolidStart is able to safely and efficiently cross the serialization boundary. Resource serialization is arguably the most important feature of a full-stack framework — without it, the back-end and front-end bridge simply won’t work in a smooth way.
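
In practice, that boundary shows up in data APIs like cache and createAsync from Solid’s router. The sketch below assumes a hypothetical getProduct helper backed by a stand-in db object:

import { cache, createAsync, useParams } from "@solidjs/router";

const getProduct = cache(async (slug: string) => {
  "use server";
  // Runs only on the server; the return value crosses to the browser via Seroval
  return await db.products.find(slug); // `db` is a stand-in for your data layer
}, "product");

export default function ProductPage() {
  const params = useParams();
  const product = createAsync(() => getProduct(params.slug));
  return <h1>{product()?.name}</h1>;
}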

Besides resource serialization, SolidStart can also use server actions. Straight from the documentation, this is how server actions look for us (note the "use server" directive that allows Vinxi to put the code in the correct place):

import { action, redirect } from "@solidjs/router";

const isAdmin = action(async (formData: FormData) => {
  "use server";
  await new Promise((resolve, reject) => setTimeout(resolve, 1000));
  const username = formData.get("username");
  if (username === "admin") throw redirect("/admin");
  return new Error("Invalid username");
});

export function MyComponent() {
  return (
    <form action={isAdmin} method="post">
      <label for="username">Username:</label>
      <input type="text" name="username" />
      <input type="submit" value="submit" />
    </form>
  );
}
Everything Comes Together

After this breakdown, things may still be a bit up in the air. So, let’s close the loop by assembling the parts: Vinxi orchestrates the routers, using Vite to bundle the code for each target and Nitro to serve it, while Solid renders the views and Seroval serializes the data that crosses the server/browser boundary.

Hopefully, this little exercise of pulling the framework apart and putting the pieces back together was interesting and insightful. Let me know in the comments below or on X if this has helped you better understand or even choose the tools for your next project.

Final Considerations

This article would not have been possible without the technical help from my awesome folks at Solid’s team: Dave Di Biase, Alexis Munsayac, Alex Lohr, Daniel Afonso, and Nikhil Saraf. Thank you for your reviews, insights, and overall making me sound cleverer!

]]>
hello@smashingmagazine.com (Atila Fassina)
<![CDATA[The View Transitions API And Delightful UI Animations (Part 2)]]> https://smashingmagazine.com/2024/01/view-transitions-api-ui-animations-part2/ https://smashingmagazine.com/2024/01/view-transitions-api-ui-animations-part2/ Tue, 02 Jan 2024 10:00:00 GMT Last time we met, I introduced you to the View Transitions API. We started with a simple default crossfade transition and applied it to different use cases involving elements on a page transitioning between two states. One of those examples took the basic idea of adding products to a shopping cart on an e-commerce site and creating a visual transition that indicates an item added to the cart.

The View Transitions API is still considered an experimental feature that’s currently supported only in Chrome at the time I’m writing this, but I’m providing that demo below as well as a video if your browser is unable to support the API.

Those diagrams illustrate (1) the origin page, (2) the destination page, (3) the type of transition, and (4) the transition elements. The following is a closer look at the transition elements, i.e., the elements that receive the transition and are tracked by the API.

So, what we’re working with are two transition elements: a header and a card component. We will configure those together one at a time.

Header Transition Elements

The default crossfade transition between the pages has already been set, so let’s start by registering the header as a transition element by assigning it a view-transition-name. First, let’s take a peek at the HTML:

<div class="header__wrapper">
  <!-- Link back arrow -->
  <a class="header__link header__link--dynamic" href="/">
    <svg ...><!-- ... --></svg>
  </a>
  <!-- Page title -->
  <h1 class="header__title">
    <a href="/" class="header__link-logo">
      <span class="header__logo--deco">Vinyl</span>Emporium </a>
  </h1>
  <!-- ... -->
</div>

When the user navigates between the homepage and an item details page, the arrow in the header appears and disappears — depending on which direction we’re moving — while the title moves slightly to the right. We can use display: none to handle the visibility.

/* Hide back arrow on the homepage */
.home .header__link--dynamic {
    display: none;
}

We’re actually registering two transition elements within the header: the arrow (.header__link--dynamic) and the title (.header__title). We use the view-transition-name property on both of them to define the names we want to call those elements in the transition:

@supports (view-transition-name: none) {
  .header__link--dynamic {
    view-transition-name: header-link;
  }
  .header__title {
    view-transition-name: header-title;
  }
}

Note how we’re wrapping all of this in a CSS @supports query so it is scoped to browsers that actually support the View Transitions API. So far, so good!

To do that, let’s start by defining our transition elements and assigning transition names to the two elements we’re transitioning: the product image (.product__image--deco) and the product disc behind the image (.product__media::before).

@supports (view-transition-name: none) {
  .product__image--deco {
    view-transition-name: product-lp;
  }
 .product__media::before {
    view-transition-name: flap;
  }
  ::view-transition-group(product-lp) {
    animation-duration: 0.25s;
    animation-timing-function: ease-in;
  }
  ::view-transition-old(product-lp),
  ::view-transition-new(product-lp) {
    /* Removed the crossfade animation */
    mix-blend-mode: normal;
    animation: none;
  }
}

Notice how we had to remove the crossfade animation from the product image’s old (::view-transition-old(product-lp)) and new (::view-transition-new(product-lp)) states. So, for now, at least, the album disc changes instantly the moment it’s positioned back behind the album image.

But doing this messed up the transition between our global header navigation and product details pages. Navigating from the item details page back to the homepage results in the album disc remaining visible until the view transition finishes rather than running when we need it to.

Let’s configure the router to match that structure. Each route gets a loader function to handle page data.

import { createBrowserRouter, RouterProvider } from "react-router-dom";
import Category, { loader as categoryLoader } from "./pages/Category";
import Details, { loader as detailsLoader } from "./pages/Details";
import Layout from "./components/Layout";

/* Other imports */

const router = createBrowserRouter([
  {
    /* Shared layout for all routes */
    element: <Layout />,
    children: [
      {
        /* Homepage is going to load a default (first) category */
        path: "/",
        element: <Category />,
        loader: categoryLoader,
      },
      {
      /* Other categories */
        path: "/:category",
        element: <Category />,
        loader: categoryLoader,
      },
      {
        /* Item details page */
        path: "/:category/product/:slug",
        element: <Details />,
        loader: detailsLoader,
      },
    ],
  },
]);

const root = ReactDOM.createRoot(document.getElementById("root"));
root.render(
  <React.StrictMode>
    <RouterProvider router={router} />
  </React.StrictMode>
);

With this, we have established the routing structure for the app:

  • Homepage (/);
  • Category page (/:category);
  • Product details page (/:category/product/:slug).

And depending on which route we are on, the app renders a Layout component. That’s all we need as far as setting up the routes that we’ll use to transition between views. Now, we can start working on our first transition: between two category pages.
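
Before we do, here’s a rough sketch of what one of those loader functions could look like. The endpoint and the fallback category below are invented for illustration; the actual demo loads its own data:

// A sketch of a route loader (hypothetical endpoint and fallback category).
export async function loader({ params }) {
  // The homepage route has no :category param, so fall back to a default.
  const category = params.category ?? "vinyl";
  const response = await fetch(`/api/categories/${category}`);
  if (!response.ok) {
    // Throwing a Response lets react-router render its error boundary.
    throw new Response("Category not found", { status: 404 });
  }
  return response.json();
}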

Transition Between Category Pages

We’ll start by implementing the transition between category pages. The transition performs a crossfade animation between views. The only part of the UI that does not participate in the crossfade is the bottom border of the category filter menu, which provides a visual indication of the active category filter; it moves from the formerly active filter to the currently active one, and we will eventually register it as a transition element.

Since we’re using react-router, we get its web-based routing solution, react-router-dom, baked right in, giving us access to the DOM bindings — or router components we need to keep the UI in sync with the current route as well as a component for navigational links. That’s also where we gain access to the View Transitions API implementation.

Specifically, we will use the component for navigation links (Link) with the unstable_viewTransition prop that tells the react-router to run the View Transitions API when switching page contents.

import { Link, useLocation } from "react-router-dom";
/* Other imports */

const NavLink = ({ slug, title, id }) => {
  const { pathname } = useLocation();
  /* Check if the current nav link is active */
  const isMatch = slug === "/" ? pathname === "/" : pathname.includes(slug);
  return (
    <li key={id}>
      <Link
        className={isMatch ? "nav__link nav__link--current" : "nav__link"}
        to={slug}
        unstable_viewTransition
      >
        {title}
      </Link>
    </li>
  );
};

const Nav = () => {
  return (
    <nav className={"nav"}>
      <ul className="nav__list">
        {categories.items.map((item) => (
          <NavLink {...item} />
        ))}
      </ul>
    </nav>
  );
};

That is literally all we need to register and run the default crossfading view transition! That’s again because react-router-dom is giving us access to the View Transitions API and does the heavy lifting to abstract the process of setting transitions on elements and views.

Creating The Transition Elements

We only have one UI element that gets its own transition and a name for it, and that’s the visual indicator for the actively selected product category filter in the app’s navigation. While the app transitions between category views, it runs another transition on the active indicator that moves its position from the origin category to the destination category.

I know that I had earlier described that visual indicator as a bottom border, but we’re actually going to establish it as a standard HTML horizontal rule (<hr>) element and conditionally render it depending on the current route. So, basically, the <hr> element is fully removed from the DOM when a view transition is triggered, and we re-render it in the DOM under whatever NavLink component represents the current route.

We want this transition only to run if the navigation is visible, so we’ll use the react-intersection-observer helper to check if the element is visible and, if it is, assign it a viewTransitionName in an inline style.

import { useInView } from "react-intersection-observer";
/* Other imports */

const NavLink = ({ slug, title, id }) => {
  const { pathname } = useLocation();
  const isMatch = slug === "/" ? pathname === "/" : pathname.includes(slug);
  /* Track the link's visibility so the marker only gets a transition name when on screen */
  const { ref, inView } = useInView();
  return (
    <li key={id}>
      <Link
        ref={ref}
        className={isMatch ? "nav__link nav__link--current" : "nav__link"}
        to={slug}
        unstable_viewTransition
      >
        {title}
      </Link>
      {isMatch && (
        <hr
          style={{
            viewTransitionName: inView ? "marker" : "",
          }}
          className="nav__marker"
        />
      )}
    </li>
  );
};

First, let’s take a look at our Card component used in the category views. Once again, react-router-dom makes our job relatively easy, thanks to the unstable_useViewTransitionState hook. The hook accepts a URL string and returns true if there is an active page transition to the target URL, as well as if the transition is using the View Transitions API.

That’s how we’ll make sure that our active image remains a transition element when navigating between a category view and a product view.

import { Link, unstable_useViewTransitionState } from "react-router-dom";
/* Other imports */

const Card = ({ author, category, slug, id, title }) => {
  /* We'll use the same URL value for the Link and the hook */
  const url = `/${category}/product/${slug}`;

  /* Check if the transition is running for the item details page URL */
  const isTransitioning = unstable_useViewTransitionState(url);

  return (
    <li className="card">
      <Link unstable_viewTransition to={url} className="card__link">
        <figure className="card__figure">
          <img
            className="card__image"
            style={{
              /* Apply the viewTransitionName if the card has been clicked on */
              viewTransitionName: isTransitioning ? "item-image" : "",
            }}
            src={`/assets/${category}/${id}-min.jpg`}
            alt=""
          />
         {/* ... */}
        </figure>
        <div className="card__deco" />
      </Link>
    </li>
  );
};

export default Card;

We know which image in the product view is the transition element, so we can apply the viewTransitionName directly to it rather than having to guess:

import {
  Link,
  useLoaderData,
  unstable_useViewTransitionState,
} from "react-router-dom";
/* Other imports */

const Details = () => {
  const data = useLoaderData();
  const { id, category, title, author } = data;
  return (
    <>
      <section className="item">
        {/* ... */}
        <article className="item__layout">
          <div>
              <img
                style={{viewTransitionName: "item-image"}}
                className="item__image"
                src={`/assets/${category}/${id}-min.jpg`}
                alt=""
              />
          </div>
          {/* ... */}
        </article>
      </section>
    </>
  );
};

export default Details;

We’re on a good track but have two issues that we need to tackle before moving on to the final transitions.

One is that the Card component’s image (.card__image) contains some CSS that applies a fixed one-to-one aspect ratio and centering for maintaining consistent dimensions no matter what image file is used. Once the user clicks on the Card — the .card__image in a category view — it becomes the .item__image in the product view and should transition into that state, devoid of those extra styles.


/* Card component image */
.card__image {
  object-fit: cover;
  object-position: 50% 50%;
  aspect-ratio: 1;
  /* ... */
}

/* Product view image */
.item__image {
 /* No aspect-ratio applied */
 /* ... */
}

Jake has recommended using React’s flushSync function to make this work. The function forces synchronous and immediate DOM updates inside a given callback. It’s meant to be used sparingly, but it’s okay to use it here to run the View Transitions API while the target component re-renders.

// Assigns view-transition-name to the image before transition runs
const [isImageTransition, setIsImageTransition] = React.useState(false);

// Applies fixed-positioning and full-width image styles as transition runs
const [isFullImage, setIsFullImage] = React.useState(false);

/* ... */

// State update function, which triggers the DOM update we want to animate
const toggleImageState = () => setIsFullImage((state) => !state);

// Click handler function - toggles both states.
const handleZoom = async () => {
  // Run API only if available.
  if (document.startViewTransition) {
    // Set image as a transition element.
    setIsImageTransition(true);
    const transition = document.startViewTransition(() => {
      // Apply DOM updates and force an immediate re-render while
      // the View Transitions API is running.
      flushSync(toggleImageState);
    });
    await transition.finished;
    // Cleanup
    setIsImageTransition(false);
  } else {
    // Fallback 
    toggleImageState();
  }
};

/* ... */

With this in place, all we really have to do now is toggle class names and view transition names depending on the state we defined in the previous code.

import React from "react";
import { flushSync } from "react-dom";

/* Other imports */

const Details = () => {
  /* React state, click handlers, util functions... */

  return (
    <>
      <section className="item">
        {/* ... */}
        <article className="item__layout">
          <div>
            <button onClick={handleZoom} className="item__toggle">
              <img
                style={{
                  viewTransitionName:
                    isTransitioning || isImageTransition ? "item-image" : "",
                }}
                className={
                  isFullImage
                    ? "item__image item__image--active"
                    : "item__image"
                }
                src={`/assets/${category}/${id}-min.jpg`}
                alt=""
              />
            </button>
          </div>
          {/* ... */}
        </article>
      </section>
      <aside
        className={
          isFullImage ? "item__overlay item__overlay--active" : "item__overlay"
        }
      />
    </>
  );
};

We are applying viewTransitionName directly on the image’s style attribute. We could have used boolean variables to toggle a CSS class and set a view-transition-name in CSS instead. The only reason I went with inline styles is to show both approaches in these examples. You can use whichever approach fits your project!
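
For reference, the CSS-driven alternative could look something like this, with a modifier class (the name below is illustrative) toggled from React instead of the inline style:

/* Toggled on the <img> while the transition should track it (illustrative class name) */
.item__image--transitioning {
  view-transition-name: item-image;
}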

Let’s round this out by refining styles for the overlay that sits behind the image when it is expanded:

.item__overlay--active {
  z-index: 2;
  display: block;
  background: rgba(0, 0, 0, 0.5);
  position: fixed;
  top: 0;
  left: 0;
  width: 100vw;
  height: 100vh;
}

.item__image--active {
  cursor: zoom-out;
  position: absolute;
  z-index: 9;
  top: 50%;
  left: 50%;
  transform: translate3d(-50%, -50%, 0);
  max-width: calc(100vw - 4rem);
  max-height: calc(100vh - 4rem);
}

Demo

The following demonstrates only the code that is directly relevant to the View Transitions API so that it is easier to inspect and use. If you want access to the full code, feel free to get it in this GitHub repo.

Conclusion

We did a lot of work with the View Transitions API in the second half of this brief two-part article series. Together, we implemented full-view transitions in two different contexts, one in a more traditional multi-page application (i.e., website) and another in a single-page application using React.

We started with transitions in a MPA because the process requires fewer dependencies than working with a framework in a SPA. We were able to set the default crossfade transition between two pages — a category page and a product page — and, in the process, we learned how to set view transition names on elements after the transition runs to prevent naming conflicts.

From there, we applied the same concept in a SPA, that is, an application that contains one page but many views. We took the React “Vinyl Emporium” app and applied transitions between full views, such as navigating between a category view and a product view. We got to see how react-router — and, by extension, react-router-dom — is used to define transitions bound to specific routes. We used it not only to set a crossfade transition between category views and between category and product views but also to set a view transition name on UI elements that also transition in the process.

The View Transitions API is powerful, and I hope you see that after reading this series and following along with the examples we covered together. What used to take a hefty amount of JavaScript is now a somewhat trivial task, and the result is a smoother user experience that irons out the process of moving from one page or view to another.

That said, the View Transitions API’s power and simplicity need the same level of care and consideration for accessibility as any other transition or animation on the web. That includes things like being mindful of user motion preferences and resisting the temptation to put transitions on everything. There’s a fine balance that comes with making accessible interfaces, and motion is certainly included.

]]>
hello@smashingmagazine.com (Adrian Bece)
<![CDATA[The Magic Of New Beginnings (January 2024 Wallpapers Edition)]]> https://smashingmagazine.com/2023/12/desktop-wallpaper-calendars-january-2024/ https://smashingmagazine.com/2023/12/desktop-wallpaper-calendars-january-2024/ Sun, 31 Dec 2023 08:00:00 GMT Maybe 2024 has already started as you’re reading this, maybe you’re still waiting for the big countdown to begin. Either way, it’s never too late or too early for some New Year’s inspiration!

More than twelve years ago, we started our monthly wallpapers series to bring you a new collection of beautiful, unique, and inspiring desktop wallpapers every month. And, of course, this month is no exception.

In this post, you’ll find January wallpapers to accompany you through your first adventures of the new year, to make you smile, and to cater for some happy pops of color on a cold and dark winter day. The wallpapers are created with love by artists and designers from across the globe and can be downloaded for free. Thank you to everyone who shared their designs with us — you are truly smashing! Have a happy and healthy 2024! ❤️

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express their emotions and experiences through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.

Cheerful Chimes City

Designed by Design Studio from India.

A Fresh Start

Designed by Ricardo Gimenes from Sweden.

Dots ’n’ Roll

“Let’s turn on 2024 with round style! Wishing all the best to everyone.” — Designed by Philippe Brouard from France.

Seal-abrating The New Year

“As January’s calendar unfurls amidst snow-kissed trees, let’s embrace the seal’s example - spreading joy and warmth. Here’s to a new year filled with hope and happiness! Happy New Year!” — Designed by PopArt Studio from Serbia.

AI

Designed by Ricardo Gimenes from Sweden.

Fishing With Penguins

“To begin with, and since we are in January, we are going to go towards the poles. We sat on an iceberg to fish, while we enjoyed the company of our penguin friends. We hope this is a great year and that you share it with those and things that make you happy. Happy new year!” — Designed by Veronica Valenzuela Jimenez from Spain.

Bird Bird Bird Bird

“Just four birds, ready for winter.” — Designed by Vlad Gerasimov from Georgia.

Open The Doors Of The New Year

“January is the first month of the year and usually the coldest winter month in the Northern hemisphere. The name of the month of January comes from ‘ianua’, the Latin word for door, so this month denotes the door to the new year and a new beginning. Let’s open the doors of the new year together and hope it will be the best so far!” — Designed by PopArt Studio from Serbia.

Winter Leaves

Designed by Nathalie Ouederni from France.

Start Somewhere

“If we wait until we’re ready, we’ll be waiting for the rest of our lives. Start today — somewhere, anywhere.” — Designed by Shawna Armstrong from the United States.

Boom!

Designed by Elise Vanoorbeek from Belgium.

Be Awesome Today

“A little daily motivation to keep your cool during the month of January.” — Designed by Amalia Van Bloom from the United States.

Happy Hot Tea Month

“You wake me up to a beautiful day; lift my spirit when I’m feeling blue. When I’m home you relieve me of the long day’s stress. You help me have a good time with my loved ones; give me company when I’m all alone. You’re none other than my favourite cup of hot tea.” — Designed by Acodez IT Solutions from India.

Peaceful Mountains

“When all the festivities are over, all we want is some peace and rest. That’s why I made this simple flat art wallpaper with peaceful colors.” — Designed by Jens Gilis from Belgium.

New Year’s Magic

“After a hard but also enchanting night of delivering presents, Santa Claus sometimes happens to drop a few presents, so that some animals get them as well. Enjoy the holidays and gifts you have received and given to your loved ones!” — Designed by LibraFire from Serbia.

Cold… Penguins!

“The new year is here! We waited for it like penguins. We look at the snow and enjoy it!” — Designed by Veronica Valenzuela from Spain.

Snowman

Designed by Ricardo Gimenes from Sweden.

A New Beginning

“I wanted to do a lettering-based wallpaper because I love lettering. I chose January because for a lot of people the new year is perceived as a new beginning and I wish to make them feel as positive about it as possible! The idea is to make them feel like the new year is (just) the start of something really great.” — Designed by Carolina Sequeira from Portugal.

January Fish

“My fish tank at home inspired me to make a wallpaper with a fish.” — Designed by Arno De Decker from Belgium.

Freedom

“It is great to take shots of birds and think about the freedom they have. Then I start dreaming of becoming one and flying around the world with their beautiful wings.” — Designed by Marija Zaric from Belgrade, Serbia.

Dare To Be You

“The new year brings new opportunities for each of us to become our true selves. I think that no matter what you are — like this little monster — you should dare to be the true you without caring what others may think. Happy New Year!” — Designed by Maria Keller from Mexico.

Yogabear

Designed by Ricardo Gimenes from Sweden.

Winter Getaway

“What could be better, than a change of scene for a week? Even if you are too busy, just think about it.” — Designed by Igor Izhik from Canada.

Angel In Snow

Designed by Brainer from Ukraine.

Snowy Octopus

Designed by Karolina Palka from Poland.

Rest Up For The New Year

“I was browsing for themes when I found this “Festival of Sleep” that takes place on the 3rd, and I’m a big fan of sleep… Especially in these cold months after the holiday craziness, it’s nice to get cozy and take a nice nap.” — Designed by Dorothy Timmer from Central Florida, USA.

Oaken January

“In our country, Christmas is celebrated in January when oak branches and leaves are burnt to symbolize the beginning of the new year and new life. It’s the time when we gather with our families and celebrate the arrival of the new year in a warm and cuddly atmosphere.” — Designed by PopArt Studio from Serbia.

Rubber Ducky Day

“Winter can be such a gloomy time of the year. The sun sets earlier, the wind feels colder, and our heating bills skyrocket. I hope to brighten up your month with my wallpaper for Rubber Ducky Day!” — Designed by Ilya Plyusnin from Belgium.

The Early January Bird

“January is the month of a new beginning, hope and inspiration. That’s why it reminds me of an early bird.” — Designed by Zlatina Petrova from Bulgaria.

New Year’s Resolution

Designed by Elise Vanoorbeek from Belgium.

White Mountains

Designed by Annika Oeser from Germany.

Oh Deer!

“Because deer are majestic creatures.” — Designed by Maxim Vanheertum from Belgium.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[Making Sense Of “Senseless” JavaScript Features]]> https://smashingmagazine.com/2023/12/making-sense-of-senseless-javascript-features/ https://smashingmagazine.com/2023/12/making-sense-of-senseless-javascript-features/ Thu, 28 Dec 2023 10:00:00 GMT Why does JavaScript have so many eccentricities!? Like, why does 0.2 + 0.1 equal 0.30000000000000004? Or, why does "" == false evaluate to true?

There are a lot of mind-boggling decisions in JavaScript that seem pointless; some are misunderstood, while others are direct missteps in the design. Regardless, it’s worth knowing what these strange things are and why they are in the language. I’ll share what I believe are some of the quirkiest things about JavaScript and make sense of them.

0.1 + 0.2 And The Floating Point Format

Many of us have mocked JavaScript by writing 0.1 + 0.2 in the console and watching it fail to produce 0.3, returning a funny-looking 0.30000000000000004 instead.

What many developers might not know is that the weird result is not really JavaScript’s fault! JavaScript is merely adhering to the IEEE Standard for Floating-Point Arithmetic that nearly every other computer and programming language uses to represent numbers.

But what exactly is the Floating-Point Arithmetic?

Computers have to represent numbers in all sizes, from the distance between planets and even between atoms. On paper, it’s easy to write a massive number or a minuscule quantity without worrying about the size it will take. Computers don’t have that luxury since they have to save all kinds of numbers in binary and a small space in memory.

Take an 8-bit integer, for example. In binary, it can hold integers ranging from 0 to 255.

The keyword here is integers. It can’t represent any decimals between them. To fix this, we could add an imaginary decimal point somewhere along our 8-bit so the bits before the point are used to represent the integer part and the rest are used for the decimal part. Since the point is always in the same imaginary spot, it’s called a fixed point decimal. But it comes with a great cost since the range is reduced from 0 to 255 to exactly 0 to 15.9375.

Having greater precision means sacrificing range, and vice versa. We also have to take into consideration that computers need to please a large number of users with different requirements. An engineer building a bridge doesn’t worry too much if the measurements are off by just a little, say a hundredth of a centimeter. But, on the other hand, that same hundredth of a centimeter can end up costing much more for someone making a microchip. The precision that’s needed is different, and the consequences of a mistake can vary.

Another consideration is the size where numbers are stored in memory since storing long numbers in something like a megabyte isn’t feasible.

The floating-point format was born from this need to represent both large and small quantities with precision and efficiency. It does so in three parts:

  1. A single bit that represents whether or not the number is positive or negative (0 for positive, 1 for negative).
  2. A significand or mantissa that contains the number’s digits.
  3. An exponent specifies where the decimal (or binary) point is placed relative to the beginning of the mantissa, similar to how scientific notation works. Consequently, the point can move around to any position, hence the floating point.

An 8-bit floating-point format can represent numbers between 0.0078 to 480 (and its negatives), but notice that the floating-point representation can’t represent all of the numbers in that range. It’s impossible since 8 bits can represent only 256 distinct values. Inevitably, many numbers cannot be accurately represented. There are gaps along the range. Computers, of course, work with more bits to increase accuracy and range, commonly with 32-bits and 64-bits, but it’s impossible to represent all numbers accurately, a small price to pay if we consider the range we gain and the memory we save.

The exact dynamics are far more complex, but for now, we only have to understand that while this format allows us to express numbers in a large range, it loses precision (the gaps between representable values get bigger) when they become too big. For example, JavaScript numbers are represented in a double-precision floating-point format, i.e., each number is represented in 64 bits in memory, leaving 53 bits to represent the mantissa. That means JavaScript can only safely represent integers between −(2^53 − 1) and 2^53 − 1 without losing precision. Beyond that, the arithmetic stops making sense. That’s why we have the Number.MAX_SAFE_INTEGER static data property to represent the maximum safe integer in JavaScript, which is (2^53 − 1), or 9007199254740991.
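
We can watch the precision run out right at that boundary; each of the following lines evaluates exactly as commented:

console.log(Number.MAX_SAFE_INTEGER);     // 9007199254740991
console.log(Number.MAX_SAFE_INTEGER + 1); // 9007199254740992
console.log(Number.MAX_SAFE_INTEGER + 2); // 9007199254740992 (precision is gone)
console.log(9007199254740992 === 9007199254740993); // true!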

But 0.3 is obviously below the MAX_SAFE_INTEGER threshold, so why can’t we get it when adding 0.1 and 0.2? The floating-point format struggles with some fractional numbers, but that isn’t a problem specific to floating point; it occurs in any number system.

To see this, let’s represent one-third (1/3) in base-10.

0.3
0.33
0.3333333 [...]

No matter how many digits we try to write, the result will never be exactly one-third. In the same way, we cannot accurately represent some fractional numbers in base-2 or binary. Take, for example, 0.2. We can write it with no problem in base-10, but if we try to write it in binary we get a recurring 1001 at the end that repeats infinitely.

0.001 1001 1001 1001 1001 1001 10 [...]

We obviously can’t have an infinitely large number, so at some point, the mantissa has to be truncated, making it impossible not to lose precision in the process. If we try to convert 0.2 from double-precision floating-point back to base-10, we will see the actual value saved in memory:

0.200000000000000011102230246251565404236316680908203125

It isn’t 0.2! We cannot represent an awful lot of fractional values — not only in JavaScript but in almost all computers. So why does running 0.2 + 0.2 correctly compute 0.4? In this case, the imprecision is so small that it gets rounded by JavaScript (at the 16th decimal), but sometimes the imprecision is enough to escape the rounding mechanism, as is the case with 0.2 + 0.1. We can see what’s happening under the hood if we try to sum the actual values of 0.1 and 0.2.

This is the actual value saved when writing 0.1:

0.1000000000000000055511151231257827021181583404541015625

If we manually sum up the actual values of 0.1 and 0.2, we will see the culprit:

0.3000000000000000444089209850062616169452667236328125

That value is rounded to 0.30000000000000004. You can check the real values saved at float.exposed.

Floating-point has its known flaws, but its positives outweigh them, and it’s standard around the world. In that sense, it’s actually a relief when all modern systems will give us the same 0.30000000000000004 result across architectures. It might not be the result you expect, but it’s a result you can predict.
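
If you need to compare fractional results in your own code, a common workaround is to test whether the difference falls within a tiny tolerance, such as Number.EPSILON, instead of relying on strict equality:

// Compare two floats within a tolerance instead of using ===
function nearlyEqual(a, b, epsilon = Number.EPSILON) {
  return Math.abs(a - b) < epsilon;
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true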

Type Coercion

JavaScript is a dynamically typed language, meaning we don’t have to declare a variable’s type, and it can be changed later in the code.

I find dynamically typed languages liberating since we can focus more on the substance of the code.

The issue comes from being weakly typed since there are many occasions where the language will try to do an implicit conversion between different types, e.g., from strings to numbers or falsy and truthy values. This is especially true when using the equality (==) and plus sign (+) operators. The rules for type coercion are intricate, hard to remember, and even counterintuitive in certain situations. It’s better to avoid using == and always prefer the strict equality operator (===).

For example, JavaScript will coerce a string to a number when compared with another number:

console.log("2" == 2); // true

The inverse applies to the plus sign operator (+). It will try to coerce a number into a string when possible:

console.log(2 + "2"); // "22"

That’s why we should only use the plus sign operator (+) if we are sure that the values are numbers. When concatenating strings, it’s better to use the concat() method or template literals.

The reason such coercions are in the language is actually absurd. When JavaScript creator Brendan Eich was asked what he would have done differently in JavaScript’s design, his answer was to be more meticulous in the implementations early users of the language wanted:

“I would have avoided some of the compromises that I made when I first got early adopters, and they said, “Can you change this?”

— Brendan Eich

The most glaring example is the reason why we have two equality operators, == and ===. When an early JavaScript user prompted his need to compare a number to a string without having to change his code to make a conversion, Brendan added the loose equality operator to satisfy those needs.

There are a lot of other rules governing the loose equality operator (and other statements checking for a condition) that make JavaScript developers scratch their heads. They are complex, tedious, and senseless, so we should avoid the loose equality operator (==) at all costs and replace it with its strict homonym (===).
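
To see just how hard those rules are to keep in your head, consider a few results that hold in any modern engine:

console.log("" == false);       // true ("" coerces to 0, false coerces to 0)
console.log("0" == false);      // true
console.log([] == false);       // true ([] becomes "" and then 0)
console.log(null == undefined); // true (a special case in the spec)
console.log(null == 0);         // false (null is only loosely equal to undefined)
console.log("" === false);      // false (no coercion with strict equality)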

Why do we have two equality operators in the first place? A lot of factors, but we can point a finger at Guy L. Steele, co-creator of the Scheme programming language. He assured Eich that we could always add another equality operator since there were dialects with five distinct equality operators in the Lisp language! This mentality is dangerous, and nowadays, all features have to be rigorously analyzed because we can always add new features, but once they are in the language, they cannot be removed.

Automatic Semicolon Insertion

When writing code in JavaScript, a semicolon (;) is required at the end of some statements, including:

  • var, let, const;
  • Expression statements;
  • do...while;
  • continue, break, return, throw;
  • debugger;
  • Class field declarations (public or private);
  • import, export.

That said, we don’t necessarily have to insert a semicolon every time since JavaScript can automatically insert semicolons in a process unsurprisingly known as Automatic Semicolon Insertion (ASI). It was intended to make coding easier for beginners who didn’t know where a semicolon was needed, but it isn’t a reliable feature, and we should stick to explicitly typing where a semicolon goes. Linters and formatters add a semicolon where ASI would, but they aren’t completely reliable either.

ASI can make some code work, but most of the time it doesn’t. Take the following code:

const a = 1
(1).toString()

const b = 1
[1, 2, 3].forEach(console.log)

You can probably see where the semicolons go, and if we formatted it correctly, it would end up as:

const a = 1;

(1).toString();

const b = 1;

[1, 2, 3].forEach(console.log);

But if we feed the prior code directly to JavaScript, all kinds of exceptions would be thrown since it would be the same as writing this:

const a = 1(1).toString();

const b = (1)[(1, 2, 3)].forEach(console.log);

In conclusion, know your semicolons.
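
One more classic gotcha worth knowing: ASI will insert a semicolon after a bare return, silently changing what a function returns:

function getUser() {
  return
  { name: "Ada" }; // ASI inserts a semicolon right after return
}

console.log(getUser()); // undefined, not the object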

Why So Many Bottom Values?

The term “bottom” is often used to represent a value that does not exist or is undefined. But why do we have two kinds of bottom values in JavaScript?

Everything in JavaScript can be considered an object, except the two bottom values null and undefined (despite typeof null returning object). Attempting to get a property value from them raises an exception.

Note that, strictly speaking, all primitive values aren’t objects. But only null and undefined aren’t subjected to boxing.

We can even think of NaN as a third bottom value that represents the absence of a number. The abundance of bottom values should be regarded as a design error. There isn’t a straightforward reason that explains the existence of two bottom values, but we can see a difference in how JavaScript employs them.

undefined is the bottom value that JavaScript uses by default, so it’s considered good practice to use it exclusively in your code. When we define a variable without an initial value, retrieving it yields undefined. The same thing happens when we try to access a non-existing property from an object. To match JavaScript’s behavior as closely as possible, use undefined to denote an existing property or variable that doesn’t have a value.

On the other hand, null is used to represent the absence of an object (hence, its typeof returns an object even though it isn’t). However, this is considered a design blunder because undefined could fulfill its purposes as effectively. It’s used by JavaScript to denote the end of a recursive data structure. More specifically, it’s used in the prototype chain to denote its end. Most of the time, you can use undefined over null, but there are some occasions where only null can be used, as is the case with Object.create in which we can only create an object without a prototype passing null; using undefined returns a TypeError.
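
Here is that Object.create detail in action:

// null is the only non-object value accepted as a prototype
const dictionary = Object.create(null);
console.log(Object.getPrototypeOf(dictionary)); // null (the end of the chain)

// Object.create(undefined); // Uncaught TypeError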

null and undefined both suffer from the path problem. When trying to access a property from a bottom value — as if they were objects — exceptions are raised.

let user;

let userName = user.name; // Uncaught TypeError

let userNick = user.name.nick; // Uncaught TypeError

There is no way around this unless we check for each property value before trying to access the next one, either using the logical AND (&&) or optional chaining (?.).

let user;

let userName = user?.name;

let userNick = user && user.name && user.name.nick;

console.log(userName); // undefined

console.log(userNick); // undefined

I said that NaN can be considered a bottom value, but it has its own confusing place in JavaScript since it represents numbers that aren’t actual numbers, usually due to a failed string-to-number conversion (which is another reason to avoid it). NaN has its own shenanigans because it isn’t equal to itself! To test if a value is NaN or not, use Number.isNaN().
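
Here is that last quirk in action:

const result = Number("not a number"); // NaN

console.log(result === NaN);       // false (NaN is never equal to itself)
console.log(Number.isNaN(result)); // true
console.log(Number.isNaN("foo"));  // false (no coercion happens)
console.log(isNaN("foo"));         // true (the global isNaN coerces first)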

We can check for all three bottom values with the following test:

function stringifyBottom(bottomValue) {
  if (bottomValue === undefined) {
    return "undefined";
  }

  if (bottomValue === null) {
    return "null";
  }

  if (Number.isNaN(bottomValue)) {
    return "NaN";
  }
}
Increment (++) And Decrement (--)

As developers, we tend to spend more time reading code rather than writing it. Whether we are reading documentation, reviewing someone else’s work, or checking our own, code readability will increase our productivity over brevity. In other words, readability saves time in the long run.

That’s why I prefer using + 1 or - 1 rather than the increment (++) and decrement (--) operators.

It’s illogical to have a different syntax exclusively for incrementing a value by one in addition to having a pre-increment form and a post-increment form, depending on where the operator is placed. It is very easy to get them reversed, and that can be difficult to debug. They shouldn’t have a place in your code or even in the language as a whole when we consider where the increment operators come from.

As we saw in a previous article, JavaScript syntax is heavily inspired by the C language, which uses pointer variables. Pointer variables were designed to store the memory addresses of other variables, enabling dynamic memory allocation and manipulation. The ++ and -- operators were originally crafted for the specific purpose of advancing or stepping back through memory locations.

Nowadays, pointer arithmetic has been proven harmful and can cause accidental access to memory locations beyond the intended boundaries of arrays or buffers, leading to memory errors, a notorious source of bugs and vulnerabilities. Regardless, the syntax made its way to JavaScript and remains there today.

While the use of ++ and -- remains a standard among developers, an argument for readability can be made. Opting for + 1 or - 1 over ++ and -- not only aligns with the principles of clarity and explicitness but also avoids having to deal with its pre-increment form and post-increment form.
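
If you have never been bitten by the two forms, here is how easily they diverge:

let i = 5;
console.log(i++); // 5 (post-increment returns the value, then increments)
console.log(i);   // 6

let j = 5;
console.log(++j); // 6 (pre-increment increments first, then returns the value)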

Overall, it isn’t a life-or-death situation but a nice way to make your code more readable.

Conclusion

JavaScript’s seemingly senseless features often arise from historical decisions, compromises, and attempts to cater to all needs. Unfortunately, it’s impossible to make everyone happy, and JavaScript is no exception.

JavaScript doesn’t have the responsibility to accommodate all developers, but each developer has the responsibility to understand the language and embrace its strengths while being mindful of its quirks.

I hope you find it worth your while to keep learning more about JavaScript and its history to get a grasp of its misunderstood features and questionable decisions, whether it’s an amazing feature like its prototypal nature, which was obscured during development, or a blunder like the this keyword and its multipurpose behavior.

Either way, I encourage every developer to research and learn more about the language. And if you’re interested, I go a bit deeper into questionable areas of JavaScript’s design in another article published here on Smashing Magazine!

]]>
hello@smashingmagazine.com (Juan Diego Rodríguez)
<![CDATA[The View Transitions API And Delightful UI Animations (Part 1)]]> https://smashingmagazine.com/2023/12/view-transitions-api-ui-animations-part1/ https://smashingmagazine.com/2023/12/view-transitions-api-ui-animations-part1/ Fri, 22 Dec 2023 13:00:00 GMT Animations are an essential part of a website. They can draw attention, guide users on their journey, provide satisfying and meaningful feedback to interaction, add character and flair to make the website stand out, and so much more!

On top of that, CSS has provided us with transitions and keyframe-based animations since at least 2009. Not only that, the Web Animations API and JavaScript-based animation libraries, such as the popular GSAP, are widely used for building very complex and elaborate animations.

With all these avenues for making things move on the web, you might wonder where the View Transitions API fits into all of this. Consider the following example of a simple task list with three columns.

We’re merely crossfading between the two screen states, and that includes all elements within it (i.e., other images, cards, grid, and so on). The API is unaware that the image that is being moved from the container (old state) to the overlay (new state) is the same element.

We need to instruct the browser to pay special attention to the image element when switching between states. That way, we can create a special transition animation that is applied only to that element. The CSS view-transition-name property applies the name of the view transition we want to apply to the transitioning elements and instructs the browser to keep track of the transitioning element’s size and position while applying the transition.

We get to name the transition anything we want. Let’s go with active-image, which is going to be declared on a .gallery__image--active class that is a modifier of the class applied to images (.gallery-image) when the transition is in an active state:

.gallery__image--active {
  view-transition-name: active-image;
}

Note that view-transition-name has to be a unique identifier and applied to only a single rendered element during the animation. This is why we are applying the property to the active image element (.gallery__image--active). We can remove the class when the image overlay is closed, return the image to its original position, and be ready to apply the view transition to another image without worrying whether the view transition has already been applied to another element on the page.

So, we have an active class, .gallery__image--active, for images that receive the view transition. We need a method for applying that class to an image when the user clicks on that respective image. We can also wait for the animation to finish by storing the transition in a variable and calling await on the finished attribute to toggle off the class and clean up our work.

// Start the transition and save its instance in a variable
const transition = document.startViewTransition(() => { /* ... */ });

// Wait for the transition to finish.
await transition.finished;

/* Cleanup after transition has completed */

Let’s apply this to our example:

function toggleImageView(index) {
  const image = document.getElementById(js-gallery-image-${index});

  // Apply a CSS class that contains the view-transition-name before the animation starts.
  image.classList.add("gallery__image--active");

  const imageParentElement = image.parentElement;

  if (!document.startViewTransition) {
    // Fallback if View Transitions API is not supported.
    moveImageToModal(image);
  } else {
    // Start transition with the View Transitions API.
    document.startViewTransition(() => moveImageToModal(image));
  }

  // This click handler function is now async.
  overlayWrapper.onclick = async function () {
    // Fallback if View Transitions API is not supported.
    if (!document.startViewTransition) {
      moveImageToGrid(imageParentElement);
      return;
    }

    // Start transition with the View Transitions API.
    const transition = document.startViewTransition(() => moveImageToGrid(imageParentElement));

    // Wait for the animation to complete.
    await transition.finished;

    // Remove the class that contains the view-transition-name after the animation ends.
    image.classList.remove("gallery__image--active");
  };
}

Alternatively, we could have used JavaScript to toggle the CSS view-transition-name property on the element in the inline HTML. However, I would recommend keeping everything in CSS as you might want to use media queries and feature queries to create fallbacks and manage it all in one place.

// Applies view-transition-name to the image
image.style.viewTransitionName = "active-image";

// Removes view-transition-name from the image
image.style.viewTransitionName = "none";

And that’s pretty much it! Let’s take a look at our example (in Chrome) with the transition element applied.

Customizing Animation Duration And Easing In CSS

What we just looked at is what I would call the default experience for the View Transitions API. We can do so much more than a transition that crossfades between two states. Specifically, just as you might expect from something that resembles a CSS animation, we can configure a view transition’s duration and timing function.

In fact, the View Transitions API makes use of CSS animation properties, and we can use them to fully customize the transition’s behavior. The difference is what we declare them on. Remember, a view transition is not part of the DOM, so what is available for us to select in CSS if it isn’t there?

When we run the startViewTransition function, the API pauses rendering, captures the new state of the page, and constructs a pseudo-element tree:

::view-transition
└─ ::view-transition-group(root)
   └─ ::view-transition-image-pair(root)
      ├─ ::view-transition-old(root)
      └─ ::view-transition-new(root)

Each one is helpful for customizing different parts of the transition:

  • ::view-transition: This is the root element, which you can consider the transition’s body element. The difference is that this pseudo-element is contained in an overlay that sits on top of everything else on the page.
    • ::view-transition-group: This mirrors the size and position between the old and new states.
      • ::view-transition-image-pair: This is the only child of ::view-transition-group, providing a container that isolates the blending work between the snapshots of the old and new transition states, which are direct children.
        • ::view-transition-old(...): A snapshot of the “old” transition state.
        • ::view-transition-new(...): A live representation of the new transition state.

Yes, there are quite a few moving parts! But the purpose of it is to give us tons of flexibility as far as selecting specific pieces of the transition.

So, remember when we applied view-transition-name: active-image to the .gallery__image--active class? Behind the scenes, the following pseudo-element tree is generated, and we can use the pseudo-elements to target either the active-image transition element or other elements on the page with the root value.

::view-transition
├─ ::view-transition-group(root)
│  └─ ::view-transition-image-pair(root)
│     ├─ ::view-transition-old(root)
│     └─ ::view-transition-new(root)
└─ ::view-transition-group(active-image)
   └─ ::view-transition-image-pair(active-image)
      ├─ ::view-transition-old(active-image)
      └─ ::view-transition-new(active-image)

In our example, we want to modify both the cross-fade (root) and transition element (active-image) animations. We can use the universal selector (*) with the pseudo-element to change animation properties for all available transition elements, or target the pseudo-elements for specific animations using the view-transition-name value.

/* Apply these styles only if API is supported */
@supports (view-transition-name: none) {
  /* Cross-fade animation */
  ::view-transition-image-pair(root) {
    animation-duration: 400ms;
    animation-timing-function: ease-in-out;
  }

  /* Image size and position animation */
  ::view-transition-group(active-image) {
    animation-timing-function: cubic-bezier(0.215, 0.61, 0.355, 1);
  }
}

Accessible Animations

Of course, any time we talk about movement on the web, we also ought to be mindful of users with motion sensitivities and ensure that we account for an experience that reduces motion.

That’s what the CSS prefers-reduced-motion query is designed for! With it, we can sniff out users who have enabled accessibility settings at the OS level that reduce motion and then reduce motion on our end as well. The following example is a heavy-handed solution that nukes all animation in those instances, but it’s worth calling out that reduced motion does not always mean no motion. So, while this code will work, it may not be the best choice for your project, and your mileage may vary.

@media (prefers-reduced-motion) {
  ::view-transition-group(*),
  ::view-transition-old(*),
  ::view-transition-new(*) {
    animation: none !important;
  }
}

Final Demo

Here is the completed demo with fallbacks and prefers-reduced-motion snippet implemented. Feel free to play around with easings and timings and further customize the animations.

This is a perfect example of how the View Transitions API tracks an element’s position and dimensions during animation and transitions between the old and new snapshots right out of the box!

See the Pen Add to cart animation v2 - completed [forked] by Adrian Bece.

Conclusion

It amazes me every time how the View Transitions API turns expensive-looking animations into somewhat trivial tasks with only a few lines of code. When done correctly, animations can breathe life into any project and offer a more delightful and memorable user experience.

That all being said, we still need to be careful how we use and implement animations. For starters, we’re still talking about a feature that is supported only in Chrome at the time of this writing. But with Safari’s positive stance on it and an open ticket to implement it in Firefox, there’s plenty of hope that we’ll get broader support — we just don’t know when.

Also, the View Transitions API may be “easy,” but it does not save us from ourselves. Think of things like slow or repetitive animations, needlessly complex animations, serving animations to those who prefer reduced motion, among other poor practices. Adhering to animation best practices has never been more important. The goal is to ensure that we’re using view transitions in ways that add delight and are inclusive rather than slapping them everywhere for the sake of showing off.

In another article to follow this one, we’ll use View Transitions API to create full-page transitions in our single-page and multi-page applications — you know, the sort of transitions we see when navigating between two views in a native mobile app. Now, we have those readily available for the web, too!

Until then, go build something awesome… and use it to experiment with the View Transitions API!

]]>
hello@smashingmagazine.com (Adrian Bece)
<![CDATA[New CSS Viewport Units Do Not Solve The Classic Scrollbar Problem]]> https://smashingmagazine.com/2023/12/new-css-viewport-units-not-solve-classic-scrollbar-problem/ https://smashingmagazine.com/2023/12/new-css-viewport-units-not-solve-classic-scrollbar-problem/ Wed, 20 Dec 2023 10:00:00 GMT Browsers shipped a new set of CSS viewport units in 2022. These units make it easier to size elements in mobile browsers, where the browser’s retractable UI affects the height of the viewport as the user scrolls the page. Unfortunately, the new units do not make it easier to size elements in desktop browsers, where classic scrollbars affect the width and height of the viewport.

The following video shows a desktop browser with classic scrollbars. As we resize the viewport (dashed line) in different ways, the CSS length 100dvw matches the width of the viewport in all situations except when a vertical classic scrollbar is present on the page. In that case, 100dvw is larger than the viewport width. This is the classic scrollbar problem of CSS viewport units, and in fact, all viewport units have it.

In desktop browsers, the size of the viewport can change as well (e.g., when the user resizes the browser window, opens the browser’s sidebar, or zooms the page), but there is no separate “small viewport size” and “large viewport size” like in mobile browsers.

So far, I’ve only talked about the “viewport,” but there are, in fact, two different viewports in web browsers: the visual viewport and the layout viewport. When the page initially loads in the browser, the visual viewport and the layout viewport have the exact same size and position. The two viewports diverge in the following two cases:

  1. When the user zooms in on a part of the page via a pinch-to-zoom or double-tap gesture, the part of the page that is visible on the screen is the visual viewport. The size of the visual viewport (in CSS pixels) decreases because it shows a smaller part of the page. The size of the layout viewport has not changed.
  2. When the browser’s virtual keyboard appears on mobile platforms, the smaller part of the page that is visible on the screen above the keyboard is once again the visual viewport. The height of the visual viewport decreases with the height of the virtual keyboard. The size of the layout viewport has again not changed.

It’s worth noting that as part of shipping the new viewport units in 2022, Chrome stopped resizing the layout viewport and initial containing block (ICB) when the virtual keyboard is shown. This behavior is considered “the best default,” and it ensures that the new viewport units are consistent across browsers. This change also made the mobile web feel less janky because resizing the ICB is a costly operation. However, the virtual keyboard may still resize the layout viewport in some mobile browsers.

In these two cases, the visual viewport continues to be “the rectangular section of the web browser in which the web page is rendered,” while the layout viewport becomes a larger rectangle that is only partially visible on the screen and that completely encloses the visual viewport. In all other situations, both viewports have the same size and position.

One benefit of the two-viewport system is that when the user pinch-zooms and pans around the page, fixed-positioned elements don’t stick to the screen, which would almost always be a bad experience. That being said, there are valid use cases for positioning an element above the virtual keyboard (e.g., a floating action button). The CSS Working Group is currently discussing how to make this possible in CSS.

CSS viewport units are based on the layout viewport, and they are unaffected by changes to the size of the visual viewport. Therefore, I will focus on the layout viewport in this article. For more information about the visual viewport, see the widely supported Visual Viewport API.
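
As a small taste, here is how you could observe the visual viewport’s size and scale from JavaScript (the API is widely supported, but a feature check costs nothing):

// Log the visual viewport's dimensions and scale factor as the user
// pinch-zooms or as the virtual keyboard appears.
const vv = window.visualViewport;

if (vv) {
  vv.addEventListener("resize", () => {
    console.log(vv.width, vv.height, vv.scale);
  });
}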

The Two Types Of Zoom

The two types of zoom are defined in the CSSOM View module:

“There are two kinds of zoom: page zoom, which affects the size of the initial viewport, and the visual viewport scale factor, which acts like a magnifying glass and does not affect the initial viewport or actual viewport.”

Page zoom is available in desktop browsers, where it can be found in the browser’s menu under the names “Zoom in” and “Zoom out” or just “Zoom”. When the page is “zoomed in,” the size of the layout viewport shrinks, which causes the page to reflow. If the page uses CSS media queries to adapt to different viewport widths (i.e., responsive web design), those media query breakpoints will be triggered by page zoom.

Scale-factor zoom is available on all platforms. It is most commonly performed with a pinch-to-zoom gesture on the device’s touch screen (e.g., smartphone, tablet) or touchpad (e.g., laptop). As I mentioned in the previous section, the size of the layout viewport does not change when zooming into a part of the page, so the page does not reflow.

                      | Page Zoom                        | Visual Viewport Scale Factor
Available on          | Desktop platforms                | All platforms
Activated by          | Keyboard command, menu option    | Pinch-to-zoom or double-tap gesture
Resizes               | Both layout and visual viewport  | Only visual viewport
Does it cause reflow? | Yes                              | No
The Layout Viewport And The Initial Containing Block

The layout viewport is the “containing block” for fixed-positioned elements. In other words, fixed-positioned elements are positioned and sized relative to the layout viewport. For this reason, the layout viewport can be viewed as the “position fixed viewport,” which may even be a better name for it.

/* this element completely covers the layout viewport */
.elem {
  position: fixed;
  top: 0;
  bottom: 0;
  left: 0;
  right: 0;
}

Here’s a tip for you: Instead of top: 0, bottom: 0, left: 0, and right: 0 in the snippet above, we can write inset: 0. The inset property is a shorthand property for the top, bottom, left, and right properties, and it has been supported in all major browsers since April 2021.
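
Applied to the snippet above, the shorthand version looks like this:

/* same result: the element covers the layout viewport */
.elem {
  position: fixed;
  inset: 0;
}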

The initial containing block (ICB) is a rectangle that is positioned at the top of the web page. The ICB has a static size, which is the “small viewport size.” When a web page initially loads in the browser, the layout viewport and the ICB have the exact same size and position. The two rectangles diverge only when the user scrolls the page: The ICB scrolls out of view, while the layout viewport remains in view and, in the case of mobile browsers, grows to the “large viewport size.”

The ICB is the default containing block for absolutely positioned elements. In other words, absolutely positioned elements are, by default, positioned and sized relative to the ICB. Since the ICB is positioned at the top of the page and scrolls out of view, so do absolutely-positioned elements.

/* this element completely covers the ICB by default */
.elem {
  position: absolute;
  top: 0;
  left: 0;
  right: 0;
  bottom: 0;
}

The ICB is also the containing block for the <html> element itself, which is the root element of the web page. Since the ICB and the layout viewport initially have the same size (the “small viewport size”), authors can make the <body> element as tall as the initial viewport by setting height to 100% on both the <html> and <body> elements.

/* By default: ICB height = initial viewport height */

/* <html> height = ICB height */
html {
  height: 100%;
}

/* <body> height = <html> height */
body {
  margin: 0;
  height: 100%;
}

/* Result: <body> height = initial viewport height */

Some websites, such as Google Search, use this method to position the page footer at the bottom of the initial viewport. Setting height to 100% is necessary because, by default, the <html> and <body> elements are only as tall as the content on the page.

                       Layout Viewport                                               Initial Containing Block
Containing block for   position: fixed elements                                      position: absolute elements
Is it visible?         Always in view (at least partially²)                          Scrolls out of view (positioned at the top of the page)
Size                   Small or large viewport size (depending on the browser UI)    Small viewport size

² The layout viewport is not fully visible when the user zooms in on a part of the page and when the browser’s virtual keyboard is shown.

The New Viewport Units

The CSS viewport units are specified in the CSS Values and Units Module, which is currently at Level 4. The original six viewport units shipped in browsers a decade ago. The new units shipped in major browsers over the past year, starting with Safari 15.4 in March 2022 and ending with Samsung Internet 21 in May 2023.

Note: The new viewport units may not be correctly implemented in some mobile browsers.

Layout Viewport    Original Unit (2013)    New Unit Equivalents (2022)
Width              vw                      svw, lvw, dvw
Height             vh                      svh, lvh, dvh
Inline Size        vi                      svi, lvi, dvi
Block Size         vb                      svb, lvb, dvb
Smaller Size       vmin                    svmin, lvmin, dvmin
Larger Size        vmax                    svmax, lvmax, dvmax

A few clarifications:

  • The “inline size” and “block size” are either the width or the height, depending on the writing direction. For example, in writing systems with a left-to-right writing direction, the inline size is the width (vi is equivalent to vw), and the block size is the height (vb is equivalent to vh).
  • The “smaller size” and “larger size” are either the width or the height, depending on which one is larger. For example, if the viewport is taller than it is wide (e.g., a smartphone in portrait mode), then the smaller size is the width (vmin is equivalent to vw), and the larger size is the height (vmax is equivalent to vh).
  • Each viewport unit is equal to one-hundredth of the corresponding viewport size. For example, 1vw is equal to one-hundredth of the viewport width, and 100vw is equal to the entire viewport width.

For each of the six original units, there are three new variants with the prefixes s, l, and d (small, large, and dynamic, respectively). This increases the total number of viewport units from 6 to 24.

  • The s-prefixed units represent the “small viewport size.”
    This means that 100svh is the height of the initial layout viewport when the browser’s UI is expanded.
  • The l-prefixed units represent the “large viewport size.”
    This means that 100lvh is the height of the layout viewport after the browser’s UI retracts. The height difference between the large and small viewport sizes is equivalent to the collapsible part of the browser’s UI:

100lvh - 100svh = how much the browser’s UI retracts

  • The old unprefixed units (i.e., vw, vh, and so on) are equivalent to the l-prefixed units in all browsers, which means that they also represent the “large viewport size.” For example, 100vh is equivalent to 100lvh.
  • The d-prefixed units represent the current size of the layout viewport, which can be either the “small viewport size” or the “large viewport size.” This means that 100dvh is the actual height of the layout viewport at any given point in time. This length changes whenever the browser’s UI retracts and expands.

Why Do We Have New CSS Units?

In previous years, the Android version of Chrome would resize the vh unit whenever the browser’s UI retracted and expanded as the user scrolled the page. In other words, vh behaved like dvh. But then, in February 2017, Chrome turned vh into a static length that is based on the “largest possible viewport”. In other words, vh started behaving like lvh. This change was made in part to match Safari’s behavior on iOS, which Apple implemented as a compromise:

“Dynamically updating the [100vh] height was not working. We had a few choices: drop viewport units on iOS, match the document size like before iOS 8, use the small view size, or use the large view size.

From the data we had, using the larger view size was the best compromise. Most websites using viewport units were looking great most of the time.”

With this change in place, the same problem that occurred in iOS Safari also started happening in Chrome. Namely, an element with height: 100vh, which is now the “large viewport size,” is taller than the initial viewport, which has the “small viewport size.” That means the bottom part of the element is not visible in the viewport when the web page initially loads. This prompted discussions about creating a solution that would allow authors to size elements based on the small viewport size. One of the suggestions was an environment variable, but the CSS Working Group ultimately decided to introduce a new set of viewport units.

/* make the hero section as tall as the initial viewport */
.hero {
  height: 100svh;
}

The same height can be achieved by setting height to 100% on the .hero element and all its ancestors, including the <body> and <html> elements, but the svh unit gives authors more flexibility.

I wasn’t able to find any good use cases for the dvh unit. It seems to me that sizing elements with dvh is not a good idea because it would cause constant layout shifts as the user scrolled the page. I had considered dvh for the following cases:

  • For fixed-positioned elements, such as modal dialogs and sidebars, height: 100% behaves the same as height: 100dvh because the containing block for fixed-positioned elements is the layout viewport, which already has a height of 100dvh. In other words, height: 100% works because 100% of 100dvh is 100dvh. This means that the dvh unit is not necessary to make fixed-positioned elements full-height in the dynamic viewport of mobile browsers. (See the sketch after this list.)
  • For vertical scroll snapping, setting the individual “pages” to height: 100dvh results in a glitchy experience in mobile browsers. That being said, it is entirely possible that mobile browsers could fix this issue and make scroll snapping with height: 100dvh a smooth experience.
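
To illustrate that first point, here is a minimal sketch (the class name is hypothetical): a fixed-positioned dialog that always fills the dynamic viewport without using dvh.

/* the containing block of a fixed-positioned element is the
   layout viewport, so 100% already tracks the dynamic height */
.modal {
  position: fixed;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%; /* equivalent to 100dvh here */
}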

There is no concept of a “small viewport size” and a “large viewport size” in desktop browsers. All viewport units, new and old, represent the current size of the layout viewport, which means that all width units are equivalent to each other (i.e., vw = svw = lvw = dvw), and all height units are equivalent to each other (i.e., vh = svh = lvh = dvh). For example, if you replaced 100vh with 100svh in your code, nothing would change in desktop browsers.

This behavior isn’t exclusive to desktop platforms. It also occurs on mobile platforms in some cases, such as when a web page is embedded in an <iframe> element and when an installed web app opens in standalone mode.

It is possible for the small and large viewport sizes to be equivalent even during regular web browsing in mobile browsers. I have found two such cases:

  1. In Safari on iOS, if the user chooses the “Hide Toolbar” option from the page settings menu, the browser’s UI will retract and stay retracted while the user scrolls the page and navigates to other web pages.
  2. In Firefox on Android, if the user disables the “Scroll to hide toolbar” option in Settings → Customize, the browser’s UI will completely stop retracting when the user scrolls web pages.

The Two Types Of Scrollbars

In a web browser, scrollbars can be either classic or overlay. On mobile platforms, scrollbars are exclusively overlay. On desktop platforms, the user can choose the scrollbar type in the operating system’s settings. The classic scrollbar option is usually labeled “Always show scrollbars.” On Windows, scrollbars are classic by default. On macOS, scrollbars are overlay by default (since 2011), but they automatically switch to classic if the user connects a mouse.

The main difference between these two types of scrollbars is that classic scrollbars are placed in a separate “scrollbar gutter” that consumes space when present, which reduces the size of the layout viewport; meanwhile, overlay scrollbars, as the name suggests, are laid over the web page without affecting the size of the layout viewport.

When a (vertical) classic scrollbar appears on the page in a desktop browser with classic scrollbars, the width of the layout viewport shrinks by the size of the scrollbar gutter, which is usually 15 to 17 CSS pixels. This causes the page to reflow. The size and position of absolutely and fixed-positioned elements may also change. By default, the browser only shows a classic scrollbar when the page overflows, but the page can force the scrollbar (or empty scrollbar track) to be shown and hidden via the CSS overflow property.

To prevent the page from reflowing whenever a vertical classic scrollbar is shown or hidden, authors can set scrollbar-gutter: stable on the <html> element. This declaration tells the browser to always reserve space for the classic scrollbar. The declaration has no effect in browsers with overlay scrollbars. The scrollbar-gutter property is not supported in Safari at the time of writing this article.
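
In code, that is a one-line declaration:

/* always reserve space for a classic scrollbar */
html {
  scrollbar-gutter: stable;
}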

A benefit of classic scrollbars is that they make it clear when an element on the page has a scrollable overflow. In comparison, overlay scrollbars are not shown unless the user actually attempts to scroll an element that is a scroll container with overflow. This can be a problem because the user may not even notice that an element contains more content than initially visible. Chrome for Android mitigates this problem by showing the overlay scrollbar until the user scrolls the element at least once.

Even if the Windows operating system switches to overlay scrollbars by default in the future, some users prefer classic scrollbars and will turn them on if possible. Therefore, developers should test in browsers with classic scrollbars and ensure that their websites remain usable.

Issues Related To Classic Scrollbars

When testing your website in a desktop browser with classic scrollbars, the two main issues to look out for are unexpected extra scrollbars caused by small amounts of overflow and empty scrollbar tracks that serve no real purpose. These are usually not major issues, but they make the website appear not quite right, which may confuse or even annoy some visitors.

Issue 1: Setting overflow To scroll Instead Of auto

Whether or not a scroll container has overflow depends on the content length, viewport width, and other factors. In situations when there is no overflow, it’s usually better to hide the scrollbar than to show an empty scrollbar track in browsers with classic scrollbars. Such an automatic scrollbar behavior can be enabled by setting overflow to auto on the scroll container.

When a website is developed on macOS, which uses overlay scrollbars by default, the developer may mistakenly set overflow to scroll instead of auto. Overlay scrollbars behave in the same manner whether overflow is set to auto or scroll. The scrollbar only appears when the user attempts to scroll an element that is a scroll container with overflow. Classic scrollbars behave differently. Notably, if overflow is set to scroll but the element does not overflow, then the browser will show empty scrollbar tracks. To avoid this problem, set overflow to auto instead of scroll.

Auto scrollbars trade the problem of empty scrollbar tracks for the problem of content reflow, but the latter can be avoided by setting scrollbar-gutter to stable on the scroll container, as I previously mentioned.
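
Putting both pieces of advice together, a sketch of such a scroll container (the class name is hypothetical) might look like this:

.scroll-container {
  /* show a scrollbar only when the content actually overflows */
  overflow-y: auto;
  /* keep the gutter reserved so the scrollbar's appearance doesn't reflow content */
  scrollbar-gutter: stable;
}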

Issue 2: Assuming That The Full Width Of A Media Query Is Available

CSS media queries don’t take into account the fact that classic scrollbars reduce the width of the viewport. In other words, media queries assume scrollbars never exist. For example, if the width of the layout viewport is 983 pixels, and the page has a vertical classic scrollbar that is 17 pixels wide, the media query (min-width: 1000px) is true because it “pretends” that the scrollbar isn’t there. And indeed, if we were to hide the scrollbar, the viewport width would grow to 1,000 pixels (983 + 17 = 1000).

@media (min-width: 1000px) {
/* 
  This does *not* mean that the viewport is
  “at least 1000px wide”.

  The viewport width could be as low as 983px
  under normal circumstances.
*/
}

This behavior is by design. Media queries “assume scrollbars never exist” in order to prevent infinite loops. Web developers should not assume that the entire width of a media query is available on the web page. For example, avoid setting the width of the page to 1000px inside a @media (min-width: 1000px) rule.³

³ Apple does not seem to agree with this reasoning. In Safari, media queries take scrollbars into account, which means that the appearance and disappearance of a classic scrollbar can trigger media query breakpoints, although the browser seems to guard against infinite loops. Nonetheless, Safari’s behavior is considered a bug.
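
Returning to the advice above, a safer pattern is to cap the width rather than hardcode it (a sketch; the class name is hypothetical):

@media (min-width: 1000px) {
  .page {
    width: 100%;
    max-width: 1000px; /* never wider than the space that is actually available */
  }
}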

Issue 3: Using 100vw To Make An Element Full Width

The length 100vw is equivalent to the width of the layout viewport, except in one case. If the page has a vertical classic scrollbar, 100vw is larger than the viewport width. Due to this anomaly, setting an element to width: 100vw causes the page to overflow horizontally by a small amount in browsers with classic scrollbars.

This is a known issue. The CSS Values and Units module includes the following note:

“Issue: Level 3 assumes scrollbars never existed because it was hard to implement, and only Firefox bothered to do so. This is making authors unhappy. Can we improve here?”

The note is referring to Firefox’s past behavior, where the browser would reduce the size of 100vw on pages that set overflow to scroll on the <html> element. Such pages had a stable scrollbar track, and in that case, 100vw matched the actual viewport width. This behavior was removed from Firefox in 2017, and it was dropped from the CSS specification shortly after. In a recent change of heart, the CSS Working Group decided to revert their earlier decision and reinstate the behavior:

“RESOLVED: If overflow: scroll is set on the root element (not propagated from <body>), account for the default scrollbar width in the size of vw. Also, take scrollbar-gutter […] into account on the root.”

At the time of writing this article, this change has not yet made it into the CSS specification. It will take some time until all the major browsers ship the new behavior.

Solving The Classic Scrollbar Problem

As the title of this article states, the new viewport units did not solve the classic scrollbar problem. The new svw, dvw, and lvw units are equivalent to the original vw unit in browsers (i.e., 100svw = 100dvw = 100lvw = 100vw). At first glance, this may seem like a missed opportunity to solve the classic scrollbar problem with the new viewport units. For example, the length 100dvw could have represented the actual viewport width as it dynamically changes in response to the appearance and disappearance of a vertical classic scrollbar. This would have allowed developers to make any element on the page as wide as the viewport more easily.

There are at least two reasons why the new viewport units did not solve the classic scrollbar problem:

  1. The new viewport units were introduced to solve the problem of 100vh being taller than the initial viewport in mobile browsers. A small mobile viewport due to the browser’s expanded UI is different from a small desktop viewport due to the presence of classic scrollbars on the page, so the same s-prefixed viewport units cannot represent the small viewport in both cases. If they did, then, for example, using 100svh to solve a layout issue in mobile browsers would have potentially unwanted side effects in desktop browsers and vice-versa.
  2. The position of the CSS Working Group is that viewport units should be “resolvable at a computed-value time” and that they should “not depend on layout”. Implementing units that depend on a layout is “relatively hard” for browsers.

The CSS Working Group recently decided to mitigate the classic scrollbar problem by making 100vw smaller in browsers with classic scrollbars when the scrollbar-gutter property is set to stable on the <html> element. The idea is that when the page has a stable scrollbar gutter, the space for the scrollbar is reserved in advance, so the appearance of the scrollbar does not decrease the viewport width. In other words, the viewport has a static width that isn’t affected by the scrollbar. In that case, the length 100vw can safely match the viewport width at all times, whether the scrollbar is present or not. Once this behavior makes it into browsers, developers will be able to use scrollbar-gutter: stable to prevent width: 100vw from horizontally overflowing the page.

/* THIS BEHAVIOR HAS NOT YET SHIPPED IN BROWSERS */

/* On pages with a stable scrollbar gutter */
html {
  scrollbar-gutter: stable;
}

/* 100vw can be safely used */
.full-width {
  width: 100vw;
}

Avoiding The Classic Scrollbar Problem

Since at least 2018, developers have been using CSS custom properties that are dynamically updated via JavaScript to get the actual size of the viewport in CSS. In the following example, the custom property --vw is a dynamic version of the vw unit that is correctly updated when the viewport width changes due to the appearance or disappearance of a vertical classic scrollbar. The CSS variable falls back to 1vw when JavaScript fails to execute or doesn’t load at all.

.full-width {
  width: calc(var(--vw, 1vw) * 100);
}

In the JavaScript code, document.documentElement.clientWidth returns the width of the ICB, which is also the width of the layout viewport. Since the global resize event does not fire when a classic scrollbar changes the viewport width, I’m instead using a resize observer on the <html> element.

new ResizeObserver(() => {
  let vw = document.documentElement.clientWidth / 100;
  document.documentElement.style.setProperty('--vw', `${vw}px`);
}).observe(document.documentElement);

With the introduction of CSS container queries to browsers over the past year, another solution that doesn’t require JavaScript became available. By turning the <body> element into an inline-size query container, the length 100cqw — which is the width of <body> in this case — can be used instead of 100vw to get the desired result. Unlike 100vw, 100cqw becomes smaller when a vertical classic scrollbar appears on the page.

body {
  margin: 0;
  container-type: inline-size;
}

.full-width {
  width: 100vw; /* fallback for older browsers */
  width: 100cqw;
}

Container queries have been supported in all desktop browsers since February 2023. If the page has additional nested query containers, the <body> element’s width (100cqw) can be stored in a registered custom property to make it available inside those query containers. Registered custom properties are not supported in Firefox at the time of writing this article.
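
As a hedged sketch of that registered-custom-property approach (the property and class names are hypothetical): registering the property with a <length> syntax makes it resolve to an absolute length where it is declared, and that computed value then inherits into nested query containers.

/* resolve the value to a concrete length at declaration time */
@property --body-width {
  syntax: "<length>";
  inherits: true;
  initial-value: 0px;
}

body {
  margin: 0;
  container-type: inline-size;
}

/* declared on a child of <body>, 100cqw resolves against <body>'s width */
main {
  --body-width: 100cqw;
}

/* the inherited length is usable even inside nested query containers */
.nested-container .full-width {
  width: var(--body-width);
}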

Conclusion

If you’d like to learn more about the concepts discussed in this article, I recommend the viewport investigation project, which was a collaboration between browser vendors to “research and improve the state of interoperability of existing viewport measurement features.” The new viewport units were, in fact, one of the focus areas of Interop 2022.

]]>
hello@smashingmagazine.com (Šime Vidas)
<![CDATA[Building Components For Consumption, Not Complexity (Part 2)]]> https://smashingmagazine.com/2023/12/building-components-consumption-not-complexity-part2/ https://smashingmagazine.com/2023/12/building-components-consumption-not-complexity-part2/ Mon, 18 Dec 2023 15:00:00 GMT Welcome back to my long read about building better components — components that are more likely to be found, understood, modified, and updated in ways that promote adoption rather than abandonment.

In the previous installment in the series, we took a good look through the process of building flexible and repeatable components, aligning with the FRAILS framework. In this second part, we will be jumping head first into building adoptable, indexable, logical, and specific components. We have many more words ahead of us.

Adoptable

According to Sparkbox’s 2022 design systems survey, the top three biggest challenges faced by teams were:

  1. Overcoming technical/creative debt,
  2. Parity between design & code,
  3. Adoption.

It’s safe to assume that points 1 and 2 are mostly due to tool limitations, siloed working arrangements, or poor organizational communication. There is no enterprise-ready design tool on the market that currently provides a robust enough code export for teams to automate the handover process. Neither have I ever met an engineering team that would adopt such a feature! Likewise, a tool won’t fix communication barriers or decades’ worth of forced silos between departments. This will likely change in the coming years, but I think that these points are an understandable constraint.

Point 3 is a concern, though. Is your brilliant design system adoptable? If we’re spending all this time working on design systems, why are people not using them effectively? Thinking through adoption challenges, I believe we can focus on three main points to make this process a lot smoother:

  1. Naming conventions,
  2. Community-building,
  3. (Over)communication.

Naming Conventions

There are too many ways to name components in our design tool, from camelCasing to kebab-casing, Slash/Naming/Conventions to the more descriptive, e.g., “Product Card — Cart”. Each approach has its pros and cons, but what we need to consider with our selection is how easy it is to find the component you need. Obvious, but this is central to any good name.

It’s tempting to map component naming 1:1 between design and code, but I personally don’t know whether this is what our goal should be. Designers and developers work in different ways and with different methods of searching for and implementing components, so we should cater to the audience. This would aid solutions based on intention, not blindly aiming for parity.

Figma can help bridge this gap with the “component description field” providing us a useful space to add additional, searchable names (or aliases, even) to every component. This means that if we call it a headerNavItemActive in code but a “Header link” in design with a toggled component property, the developer-friendly name can be added to the description field for searchable parity.

The same approach can be applied to styles as well.

There is a likelihood that your developers are working from a more tokenized set of semantic styles in code, whereas the design team may need less abstract styles for the ideation process. This delta can be tricky to navigate from a Figma perspective because we may end up in a world where we’re maintaining two or more sources of truth.

The advice here is to split the quick styles for ideation and semantic variables into different sets. The semantic styles can be applied at the component level, whereas the raw styles can be used for developing new ideas.

As an example, Brand/Primary may be used as the border color of an active menu item in your design files because searching “brand” and “primary” may be muscle memory and more familiar than a semantic token name. Within the component, though, we want to be aliasing that token to something more semantic. For example, border-active.

Note: Some teams go to a further component level with their naming conventions. For example, this may become header-nav-item-active. It’s hyper-specific, meaning that any use outside of this “Header link” example may not make sense for collaborators looking through the design file. Component-level tokens are an optional step in design systems. Be cautious, as introducing another layer to your token schema increases the amount of tokens you need to maintain.

This means if we’re working on a new idea — for example, we have a set of tabs in a settings page, and the border color for the active tab at the ideation stage might be using Brand/Primary as the fill — when this component is contributed back to the system, we will apply the correct semantic token for its usage, our border-active.
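
Expressed as CSS custom properties (a hedged sketch; the names and color value are illustrative, not a real system’s tokens), the aliasing might look like this:

:root {
  /* raw brand style, handy for quick ideation */
  --brand-primary: #5a3dff;

  /* semantic alias applied at the component level */
  --border-active: var(--brand-primary);
}

/* the component consumes the semantic token, not the raw one */
.header-nav-item-active {
  border-color: var(--border-active);
}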

Do note that this advice is probably best suited to large design teams where your contribution process is lengthier and requires the distinct separation of ideation and production or where you work on a more fixed versioning release cycle for your system. For most teams, a single set of semantic variables will be all you need. Variables make this process a lot easier because we can manage the properties of these separate tokens in a central location. But! This isn’t an article about tokens, so let’s move on.

Community-building

A key pillar of a successful design system is advocacy across the PDE (product, design, and engineering) departments. We want people to be excited, not burdened by its rules. In order to get there, we need to build a community of internal design system advocates who champion the work being done and act as extensions of the central team. This may sound like unpaid support work, but I promise you it’s more than that.

Communicating constantly with designers taught me that with the popularity of design systems booming over the past few years, more and more of us are desperate to contribute to them. Have you ever seen a local component in a file that is remarkably similar to one that already exists? Maybe that designer wanted to scratch the itch of building something from the ground up. This is fine! We just need to encourage that more widely through a more open contribution model back to the central system.

How can the (central) systems team empower designers within the wider organization to build on top of the system foundations we create? What does that world look like for your team? This is commonly referred to as the “hub and spoke” model within design systems and can really help to accelerate interest in your system usage goals.

“There are numerous inflection points during the evolution of a design system. Many of those occur for the same fundamental reason — it is impossible to scale a design system team enough to directly support every demand from an enterprise-scale business. The design system team will always be a bottleneck unless a structure can be built that empowers business units and product teams to support themselves. The hub and spoke (sometimes also called ‘core + federated’) model is the solution.”

— Robin Cannon, “The hub and spoke design system model” (IBM)

In simple terms, a community can be anything as small as a shared Slack/Teams channel for the design system all the way up to fortnightly hangouts or learning sessions. What we do here is help to foster an environment where discussion and shared knowledge are at the center of the system rather than being tacked on after the components have been released.

The team at Zalando has developed a brilliant community within the design team for their system. This is in the form of a sophisticated web portal, frequent learning and educational meetings, and encouraging an “open house” mindset. Apart from the custom-built portal, I believe this approach is an easy-to-reach target for most teams, regardless of size. A starting point for this would be something as simple as an open monthly meeting or office hours, run by those managing your system, with invites sent out to all designers and cross-functional partners involved in production: product managers, developers, copywriters, product marketers, and the list goes on.

For those looking for inspiration on how to run semi-regular design systems events, take a look at what the Gov UK team have started over on Eventbrite. They have run a series of events ranging from accessibility deep dives all the way up to full “design system days.”

Leading with transparency is a solid technique for placing the design system as close as possible to those who use it. It can help to shift the mindset from being a siloed part of the design process to feeding all parts of the production pipeline for all key partners, regardless of whether you build it or use it.

Back to advocacy! As we roll out this transparent and communicative approach to the system, we are well-placed to identify key allies across the product, design, and engineering team/teams that can help steward excellence within their own reach. Is there a product manager who loves picking apart the documentation on the system? Let’s help to position them as a trusted resource for documentation best practices! Or a developer that always manages to catch incorrect spacing token usage? How can we enable them to help others develop this critical eye during the linting process?

This is the right place to mention Design Lint, a Figma plugin that I can only highly recommend. Design Lint will loop through layers you’ve selected to help you find possibly missing styles. When you write custom lint rules, you can check for errors like color styles being used in the wrong way, flag components that aren’t published to your library, mark components that don’t have a description, and more.

Each of these advocates for the system, spread across departments within the business, will help to ensure consistency and quality in the work being produced.

(Over)communication

Closely linked to advocacy is the importance of regular, informative, and actionable communication. Examples of the various types of communication we might send are:

  • Changelog/release notes.
  • Upcoming work.
  • System survey results. (Example: “Design Maturity Results, Sep-2023,” UK Department for Education.)
  • Resource sharing. Found something cool? Share it!
  • Hiring updates.
  • Small wins.

That’s a lot! This is a good thing, as it means there is always something to share among the team to keep people close, engaged, and excited about the system. If your partners are struggling to see how important and central a design system is to the success of a product, this list should help push that conversation in the right direction.

I recommend trying to build a pattern of regularity with your communication to firstly build the habit of sharing and, secondly, to introduce formality and weight to the updates. You might also want to decide whether you look forward or backward with the updates, meaning at the start or end of a sprint if you work that way.

Or perhaps you can follow a pattern as the following one:

  • Changelog/release notes are sent on the final day of every sprint.
  • “What’s next?” is shared at the start of a sprint.
  • Cool resources are shared mid-sprint to help inspire the team (and to provide a break between focus work sessions).
  • Small wins are shared quarterly.
  • Survey results are shared at the start of every second quarter.
  • Hiring updates are shared as they come up.

Outside of the system, communication really does make or break the success of a project, so leading from the front ensures we’re doing everything we can.

Indexable

The biggest issue when building or maintaining a system is knowing how your components will be used (or not used). Of course, we will never know until we try it out (btw, this is also the best piece of design advice I’ve ever been given!), but we need to start somewhere.

Design systems should prioritize quality over speed. But product teams often work in “ship at all costs” mode, prioritizing speed over quality.

“What do you do when a product team needs a UI component, pattern, or feature that the design system team cannot provide in time or is not part of their scope?”

— Josh Clark, “Ship Faster by Building Design Systems Slower”

What this means is starting with real-world needs and problems. The likelihood when starting a system is that you will create all the form fields, then some navigational components, and maybe a few notification/alerts/callouts/notification components (more on naming conventions later) and then publish your library, hoping the team will use those components.

The harsh reality is, though, the following:

  • Your team members aren’t aware of which components exist.
  • They don’t know what components are called yet.
  • There is no immediate understanding of how components are translated into code.
  • You’re building components without needing them yet.

As you continue to sprint on your system, you will realize over time that more and more design work (user flows, feature work) is being pushed over to your product managers or developers without adhering to the wonderful design system you’ve been crafting. Why is that? It’s because people can’t discover your components! (Are they easily indexable?)

This is where the importance of education and communication comes into play. Whether it’s from design to development, design to copywriting, product to design, or brand to product, there is always a little bit more communication that can happen to ease these tensions within teams. Design Ops as a profession is growing in popularity amongst larger organizations for this very purpose — to better foster and facilitate communication channels not only amongst disparate design teams but also cross-functionally.

Note: Design Ops refers to the practice of integrating the design team’s workflow into the company’s broader development context. In practical terms, this means the design ops role is responsible for planning and managing the design team’s work and making sure that designers are collaborating effectively with product and engineering teams throughout the development process.

Back to discoverability! That communication layer could be introduced in a few ways, depending on how your team is structured. Using the channel within Slack or Teams (or whichever messaging tool you use) example from before, we can have a centralized communication channel about this very specific job — components.

Within this channel, the person/s responsible for the system is encouraged to frequently post updates with as much context as is humanly possible.

For example:

  • What are you working on now?
  • What updates should we expect within the next day/week/month?
  • Who is working on what components?
  • How can the wider team support or contribute to this work?
  • Are there any blockers?

Starting with these questions and answers in a public forum will encourage wider communication and understanding around the system to ultimately force a wider adoption of what’s being worked on and when.

Secondly, within the tools themselves, we can be over-the-top communicative whilst we create. Making heavy use of the version history feature within Figma, we can add very intentional timestamps on activity, spelling out exactly what is happening, when, and by whom. Going into the weeds here to effectively use that section of the file as mini-documentation can allow your collaborators (even those without a paid license!) to get as close to the work as possible.

Additionally, if you are using a branch-based workflow for component management, we encourage you to use the branch descriptions as a way to achieve a similar result.

Note: If you are investigating a branch workflow within a large design organization, I recommend using them for smaller fixes or updates and for larger “major” releases to create new files. This will allow for a future world where one set of designers needs to work on v1, whereas others use v2.

Naming Conventions

Undoubtedly, the hardest part of design system work is naming things. What I call a dropdown, you may call a select, and someone else may call an option list. This makes it extremely difficult to align an entire team and encourage one way of naming anything.

However, there are techniques we can employ to ensure that we’re serving the largest number of users of our system as possible. Whether it’s using Figma features or working closer with our development team, there is a world in which people can find the components they need and when they need them.

I’m personally a big fan of prioritizing discoverability over complexity at every stage of design, from how we name our components to frames to entire files. What this means is that, more often than not, we’re better off introducing verbosity, rather than trying to make everything as concise as possible.

This is probably best served with an example!

What would you call this component?

  • Dropdown.
  • Popover.
  • Actions.
  • Modal.
  • Something else?

Of course, context is very important when naming anything, which is why the task is so hard. We are currently unaware of how this component will be used, so let’s introduce a little bit of context to the situation.

Has your answer changed? The way I look at this component is that, although the structure is quite generic — rounded card, inner list with icons — the usage is very specific. This is to be used on a search filter to provide the user with a set of actions that they can carry out on the results. You may:

  1. Import a predefined search query.
  2. Export your existing search query.
  3. Share your search query.

For this reason, why would we not call this something like search actions? This is a simplistic example (and doesn’t account for the many other areas of the product that this component could be used), but maybe that’s okay. As we build and mature our system, we will always hit walls where one component needs to — or can be — used in many other places. It’s at this time that we make decisions about scalability, not before we have usage.

Other options for this specific component could be:

  • Action list.
  • Search dropdown.
  • Search / Popover.
  • Filter menu.

Logical

Have you ever been in a situation where you searched for a component in the Figma Assets panel and not been sure of its purpose? Or have you been unsure of the customization possible within its settings? We all have!

I tend to find that this is the result of us (as design systems maintainers) optimizing for creation and not usage. This is so important, so I’ll say it again:

We tend to optimize for the people building the system, not for the people using it.

The consumers/users of a system will always far outweigh the people managing it. They will also be further away from the decisions that went into making the component and the reasons behind why it is built the way it is.

Here are a few hypothetical questions worth thinking through:

  • Why is this component called a navbar, and not a tab-bar?
  • Why does it have four tabs by default and not three, like the production app?
  • There’s only one navbar in the assets list, but we support many products. Where are the others?
  • How do I use the dark mode version of this component?
  • I need a tablet version of the table component. Should I modify this one, or do we have an alternative version ready to be used?

These may seem like familiar questions to you. And if not, congratulations, you’re doing a great job!

Figma makes it easy to build complexity into components, arguably too easy. I’m sure you’ve found yourself in a situation where you create a component set with too many permutations or ended up in a world where the properties applied to a component turn the component properties panel into what I like to call “prop soup.”

A good design system should be logical (usable). To me, usability means:

  1. Speed of discovery, and
  2. Efficient implementation of components.

The speed of discovery and the efficient implementation of components can — brace yourself! — sometimes mean repetition. That very much goes against our goals of a don’t repeat yourself system and will horrify those of you who yearn for a world in which consolidation is a core design system principle but bear with me for a bit more.

The canvas is a place for ideation and flexibility and a place where we need to encourage the fostering of new ideas fast. What isn’t fast is a confused designer. As design system builders, we then need to work in a world where components are customizable but only after being understood. And what is not easily understandable is a component with an infinite number of customization options and a generic name. What is understandable is a compact, descriptive, and lightweight component.

Let’s take an example. Who doesn’t love… buttons? (I don’t, but this atomic example is the simplest way to communicate our problem.)

Here, we have a single button component set with:

  • Four intentions (primary, secondary, error, warning);
  • Two types (fill, stroke);
  • Three different sizes (large, medium, small);
  • And four states (default, hover, focus, inactive).

Even while listing those out, we can see a problem. The easy way to think this through is by asking yourself, “Is a designer likely to need all of these options when it comes to usage?”

With this example, it might look like the following question: “Will a designer ever need to switch between a primary button and a warning one?” Or are they actually two separate use cases and, therefore two separate components?

To probably no one’s surprise, my preference is to split that component right down into its intended usage. That would then mean we have one variant for each component type:

  1. Primary,
  2. Secondary,
  3. Error (Destructive),
  4. Warning.

Four components for one button! Yes, that’s right, and there are two huge benefits if you decide to go this way:

  1. The Assets panel becomes easier to navigate, with each primary variant within each set being visually surfaced.
  2. The designer removes one decision from component usage: what type to use.

Let’s help set our (design) teams up for success by removing decisions! The design was intentionally placed within brackets there because, as you’re probably rightly thinking, we lose parity with our coded components here. You know what? I think that’s totally fine. Documentation and component handover happen once with every component, and it doesn’t mean we need to sacrifice usability within the design to satisfy the front-end framework composability. Documentation is still a vital part of a design system, and we can communicate component permutations in a method that meets design and development in the middle.

Auto Layout

Component usability is also heavily informed by the decision to use auto layout or not. It can be hard to grapple with, but my advice here is to go all in on using auto layout. Not only does it help to remove the need for eyeballing measurements within production designs, but it also helps remove the burden of spacing for non-design partners. If your copywriter needs to edit a line of text within a component, they can feel comfortable doing so with the knowledge that the surrounding content will flow and not “break” the design.

Note: Using padding and gap variables within main components can remove the “Is the spacing correct?” question from component composition.

Auto layout also provides us with some guardrails with regard to spacing and margins. We strive for consistency within systems, and using auto layout everywhere pushes us as far as possible in that direction.

Specific

We touched on this in the “usable” section, but naming conventions are so important for ensuring the discoverability and adoption of components within a system.

The more specific we can make components, the more likely they are to be used in the right place. Again, this may mean introducing inefficiencies within the system, but I strongly believe that efficiency is a long-term play and something we reach gradually over time. This means being incredibly inefficient in the short term and being okay with that!

Specific to me means calling a header a header, a filter a filter, and a search field a search field. Doesn’t it seem obvious? You’re right. It seems obvious, but if my Twitter “name that component” game has taught me anything, it’s that naming components is hard.

Let’s take our search field example.

  • Apple’s Human Interface Guidelines call it a “search field.”
  • Material Design calls it a “search bar.”
  • Microsoft Fluent 2 doesn’t have a search field. Instead, it has a “combobox” component with a typeahead search function.

Sure, the intentions may be different between a combobox and a search field or a search bar, but does your designer or developer know about these subtle nuances? Are they aware of the different use cases when searching for a component to use? Specificity here is the sharpest way for us to remove these questions and ensure efficiency within the system.

As I said before, this may mean that we end up performing inefficient activities within the system. For example, instead of bundling combobox and search into one component set with toggle-able settings, we should split them. This means searching for “search” in Figma would provide us with the only component we need, rather than having to think ahead if our combobox component can be customized to our needs (or not).

Conclusion

It was a long journey! I hope that throughout the past ten thousand words or so, you’ve managed to extract quite a few useful bits of information and advice, and you can now tackle your design systems within Figma in a way that increases the likelihood of adoption. As we know, this is right up there with the priorities of most design systems teams, and I firmly believe that following the principles laid out in this article will help you (as maintainers) sprint towards a path of more funding, more refined components, and happier team members.

And should you need some help or if you have questions, ask me in the comments below, or ping me on Twitter/Posts/Mastodon, and I’ll be more than happy to reply.

Further Reading

  • “Driving change with design systems and process,” Matt Gottschalk and Aletheia Délivré (Config 2023)
    The conference talk explores in detail how small design teams can use design systems and design operations to help designers have the right environment for them.
  • Gestalt 2023 — Q2 newsletter
    In this article, you will learn about the design system roadmaps (from the Pinterest team).
  • Awesome Design Tokens
    A project that hosts a large collection of design token-related articles and links, such as GitHub repositories, articles, tools, Figma and Sketch plugins, and many other resources.
  • “The Ondark Virus” (D’Amato Design blog)
    An important article about naming conventions within design tokens.
  • “API?” (RedHat Help)
    This article will explain in detail how APIs (Application Programming Interface) work, what the SOAP vs. REST protocols are, and more.
  • “Responsive Web Design,” by Ethan Marcotte (A List Apart)
    This is an old (but gold) article that set the de-facto standards in responsive web design (RWD).
  • “Simple design system structure” (FigJam file, by Luis Ouriach, CC-BY license)
    For when you need to get started!
  • “Fixed aspect ratio images with variants” (Figma file, by Luis Ouriach, CC-BY license)
    Aspect ratios are hard with image fills, so the trick to making them work is to define your breakpoints and create variants for each image. As the image dimensions are fixed, you will have much more flexibility — you can drag the components into your designs and use auto layout.
  • Mitosis
    Write components once, run everywhere; compiles to React, Vue, Qwik, Solid, Angular, Svelte, and others.
  • “Create reusable components with Mitosis and Builder.io,” by Alex Merced
    A tutorial about Mitosis, a powerful tool that can compile code to standard JavaScript in addition to frameworks and libraries like Angular, React, and Vue, allowing you to create reusable components.
  • “VueJS — Component Slots” (Vue documentation)
    Components can accept properties (which can be JavaScript values of any type), but how about template content?
  • “Magic Numbers in CSS,” by Chris Coyier (CSS Tricks)
    In CSS, magic numbers refer to values that work under some circumstances but are frail and prone to break when those circumstances change. The article will take a look at some examples so that you know what they are and how to avoid the issues related to their use.
  • “Figma component properties” (Figma, YouTube)
    In this quick video tip, you’ll learn what component properties are and how to create them.
  • “Create and manage component properties” (Figma Help)
    New to component properties? Learn how component properties work by exploring the different types, preferred values, and exposed nested instances.
  • “Using auto layout” (Figma Help)
    Master auto layout by exploring its properties, including resizing, direction, absolute position, and a few others.
  • “Add descriptions to styles, components, and variables” (Figma Help)
    There are a few ways to incorporate design system documentation in your Figma libraries. You can give styles, components, and variables meaningful names; you can add short descriptions to styles, components, and variables; you can add links to external documentation to components; and you can add descriptions to library updates.
  • “Design system components, recipes, and snowflakes,” by Brad Frost
    Creating things with a component-based mindset right from the start saves countless hours. Everything is/should be a component!
  • “What is digital asset management?” (IBM)
    A digital asset management solution provides a systematic approach to efficiently storing, organizing, managing, retrieving, and distributing an organization’s digital assets.
  • “Search fields (Components)” (Apple Developer)
    A search field lets people search a collection of content for specific terms they enter.
  • “Search — Components Overview” (Material Design 3)
    Search lets people enter a keyword or phrase to get relevant information.
  • “Combobox — Components” (Fluent 2)
    A combobox lets people choose one or more options from a list or enter text in a connected input; entering text will filter options or allow someone to submit a free-form answer.
  • “Pharos: JSTOR’s design system serving the intellectually curious” (JSTOR)
    Building a design system from the ground up — a detailed account written by the JSTOR team.
  • “Design systems are everybody’s business,” by Alex Nicholls (Director of Design at Workday)
    This is Part 1 in a three-part series that takes a deep dive into Workday’s experience of developing and releasing their design system out into the open. For the next parts, check Part II, “Productizing your design system,” and Part III, “The case for an open design system.”
  • “Design maturity results ’23” (UK Dept. for Education)
    The results of the design maturity survey carried out in the Department for Education (UK), September 2023.
  • “Design Guidance and Standards” (UK Dept. for Education)
    Design principles, guidance, and standards to support people who use the Department for Education services (UK).
  • Sparkbox’s Design Systems Survey, 2022 (5th edition)
    The top three biggest challenges faced by design teams are overcoming technical/creative debt, parity between design & code, and adoption. This article reviews the survey results in detail; 183 respondents maintaining design systems responded.
  • “The hub and spoke design system model,” by Robin Cannon (IBM)
    No design system team can scale enough to support an enterprise-scale business by itself. This article sheds some light on IBM’s hub and spoke model.
  • Building a design system around collaboration, not components(Figma, YouTube)
    It’s easy to focus your design system on the perfect component, missing out on the aspect that’ll ensure your success — collaboration. Louise From and Julia Belling (from Zalando) explain how they created and then scaled effectively their internal design system.
  • Friends of Figma, DesignOps(YouTube interest group)
    This group is about practices and resources that will help your design organization to grow. The core topics are centered around the standardization of design, design growth, design culture, knowledge management, and processes.
  • Linting meets Design,” by Konstantin Demblin (George Labs)
    The author is convinced that the concept of “design linting” (in Sketch) is groundbreaking for digital design and will remain state-of-the-art for a long time.
  • How to set up custom design linting in Figma using the Design Lint plugin,” by Daniel Destefanis (Product Design Manager at Discord)
    This is an article about Design Lint — a Figma plugin that loops through layers you’ve selected to help you find missing styles. You can check for errors such as color styles being used in the wrong way, flag components that aren’t published to your library, mark components that don’t have a description, and so on.
  • Design Systems and Speed,” by Brad Frost
    In this Twitter thread, Brad discusses the seemingly paradoxical relationship between design systems and speed. Design systems make the product work faster. At the same time, do design systems also need to go slower?
  • Ship Faster by Building Design Systems Slower,” by Josh Clark (Principal, Big Medium)
    Design systems should prioritize quality over speed, but product teams often have “ship at all costs” policies, prioritizing speed over quality. Actually, successful design systems move more slowly than the products they support, and the slower pace doesn’t mean that they have to be the bottleneck in the process.
  • Design Systems, a book by Alla Kholmatova (Smashing Magazine)
    Often, our design systems get out-of-date too quickly or just don’t get enough traction in our companies. What makes a design system effective? What works and what doesn’t work in real-life products? The book is aimed mainly at small to medium-sized product teams trying to integrate modular thinking into their organization’s culture. Visual and interaction designers, UX practitioners, and front-end developers particularly, will benefit from the knowledge in this book.
  • Making Your Collaboration Problems Go Away By Sharing Components,” by Shane Hudson (Smashing Magazine)
    Recently UXPin has extended its powerful Merge technology by adding npm integration, allowing designers to sync React component libraries without requiring any developer input.
  • Taking The Stress Out Of Design System Management,” by Masha Shaposhnikova (Smashing Magazine)
    In this article, the author goes over five tips that make it easier to manage a design system while increasing its effectiveness. This guide is aimed at smaller teams.
  • Around The Artifacts Of Design Systems (Case Study),” by Dan Donald (Smashing Magazine)
    Like many things, a design system isn’t ever a finished thing but a journey. How we go about that journey can affect the things we produce along the way. Before diving in and starting to plan anything out, be clear about where the benefits and the risks might lie.
  • “Design Systems: Useful Examples and Resources,” by Cosima Mielke (Smashing Magazine)
    In complex projects, you’ll sooner or later get to the point where you start to think about setting up a design system. In this article, some interesting design systems and their features will be explored, as well as useful resources for building a successful design system.
]]>
hello@smashingmagazine.com (Luis Ouriach)
<![CDATA[CSS Scroll Snapping Aligned With Global Page Layout: A Full-Width Slider Case Study]]> https://smashingmagazine.com/2023/12/css-scroll-snapping-aligned-global-page-layout-case-study/ https://smashingmagazine.com/2023/12/css-scroll-snapping-aligned-global-page-layout-case-study/ Wed, 13 Dec 2023 10:00:00 GMT You know what’s perhaps the “cheapest” way to make a slider of images, right? You set up a container, drop a bunch of inline image elements in it, then set overflow-x: auto on it, allowing us to swipe through them. The same idea applies nicely to a group of cards, too.

But we’ll go deeper than scroll snapping. The thing with sliders is that it can be difficult to instruct them on where to “snap.” For example, what if we want to configure the slider in such a way that images always snap at the left (or inline-start) edge when swiping right to left?

But that’s not even the “tricky” part we’re looking at. Say we are working within an existing page layout where the main container of the page has a set amount of padding applied to it. In this case, the slider should always begin at the inline starting edge of the inside of the container, and when scrolling, each image should snap to the edge rather than scroll past it.

Simply drop the slider in the layout container, right? It’s not as straightforward as you might think. If you notice in the illustrations, the slider is outside the page’s main container because we need it to go full-width. We do that in order to allow the images to scroll fully edge-to-edge and overflow the main body.

Our challenge is to make sure the slider snaps into place consistent with the page layout’s spacing, indicated by the dashed blue lines in the drawings. The green area represents the page container’s padding, and we want images to snap right at the blue line.

The Basic Layout

Let’s start with some baseline HTML that includes a header and footer, each with an inner .container element that’s used for the page’s layout. Our slider will sit in between the header and footer but lack the same inner .container that applies padding and width to it so that the images scroll the full width of the page.

<header>
  <div class="container">
    <!-- some contained header with some nav items -->
  </div>
</header>
<main>
  <section class="slider">
    <!-- our slider -->
  </section>
  <section class="body-text">
    <div class="container">
      <!-- some contained text -->
    </div>
  </section>
</main>
<footer>
  <div class="container">
    <!-- a contained footer -->
  </div>
</footer>
Creating The Container

In contrast to the emphasis I’ve put on scroll snapping for this demo, the real power in creating the slider does not actually start with scroll snapping. The trick to create something like this starts with the layout .container elements inside the header and footer. We’ll set up a few CSS variables and configure the .container’s properties, such as its width and padding.

The following bit of CSS defines a set of variables that are used to control the maximum width and padding of a container element. The @media rules are used to apply different values to these properties depending on the viewport’s width.


:root {
  --c-max-width: 100%;
  --c-padding: 10px;

  @media screen and (min-width: 768px) {
    --c-max-width: 800px;
    --c-padding: 12px;
  }
  @media screen and (min-width: 1000px) {
    --c-max-width: 940px;
    --c-padding: 24px;
  }
  @media screen and (min-width: 1200px) {
    --c-max-width: 1200px;
    --c-padding: 40px;
  }
}

The first couple of lines of the :root element’s ruleset define two CSS custom properties: --c-max-width and --c-padding. These properties are used to control the layout .container’s maximum width and padding.

Next up, we have our @media rules. These apply different values to the --c-max-width and --c-padding properties depending on the screen size. For example, the first @media rule updates the value of --c-max-width from 100% to 800px, as well as the --c-padding from 10px to 12px when the screen width is at least 768px.

Those are the variables. We then set up the style rules for the container, which we’ve creatively named .container, and apply those variables to it. The .container’s maximum width and inline padding are assigned to the also creatively named --c-max-width and --c-padding variables. This opens up our container’s variables at the root level so that they can easily be accessed by other elements when we need them.
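
To make that concrete, here is a minimal sketch of what such a .container ruleset could look like. The max-width and padding-inline declarations follow directly from the description above; the margin-inline: auto centering is an assumption on my part:

.container {
  max-width: var(--c-max-width);
  padding-inline: var(--c-padding);
  margin-inline: auto; /* assumed: centers the container on the page */
}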

I am using pixels in these examples because I want this tutorial to be about the actual technique instead of using different sizing units. Also, please note that I will be using CSS nesting for the demos, as it is supported in every major browser at the time I’m writing this.

The Scroll-Snapping

Let’s work on the scroll-snapping part of this slider. The first thing we’re going to do is update the HTML with the images. Remember that this slider is outside of the .container (we’ll take care of that later).

<header>
  <!-- .container -->
</header>

<section class="slider">
  <div>
    <img src="..." alt="">
  </div>
  <div>
    <img src="..." alt="">
  </div>
  <div>
    <img src="..." alt="">
  </div>
  <!-- etc. -->
</section>

<footer>
  <!-- .container -->
</footer>

Now we have a group of divs that are direct children of the .slider. And those, in turn, each contain one image element. With this in place, it’s time for us to style this as an actual slider. Flexbox is an efficient way to change the display behavior of the .slider’s divs so that they flow in the inline direction rather than stacking vertically as they naturally would as block-level elements. Using Flexbox also gives us access to the gap property to space things out a bit.

.slider {
  display: flex;
  gap: 24px;
}

Now we can let the images overflow the .slider in the horizontal, or inline, direction:

.slider {
  display: flex;
  gap: 24px;
  overflow-x: auto;
}

Before we apply scroll snapping, we ought to configure the divs so that the images are equally sized. A slider is so much better to use when the images are visually consistent rather than having a mix of portrait and landscape orientations, creating a jagged flow. We can use the flex property on the child divs, which is shorthand for the flex-grow, flex-shrink, and flex-basis properties:

.slider {
  display: flex;
  gap: 24px;
  overflow-x: auto;

  > * {
    flex: 0 0 300px;
  }
}

This way, each div is fixed at a width of 300px; it can neither grow nor shrink. But! In order to contain the images in the space, we will set them to take up the full 100% width of the divs, slap an aspect-ratio on them to maintain proportions, then use the object-fit property to cover the div’s dimensions.

.slider {
  display: flex;
  gap: 24px;
  overflow-x: auto;

  > * {
    flex: 0 0 300px;
  }

  & img {
    aspect-ratio: 3 / 4;
    object-fit: cover;
    width: 100%;
  }
}

With this in place, we can now turn to scroll snapping:

.slider {
  display: flex;
  gap: 24px;
  overflow-x: auto;
  scroll-snap-type: x mandatory;

  > * {
    flex: 0 0 300px;
    scroll-snap-align: start;
  }

  /* etc. */
}

Here’s what’s up:

  • We’re using the scroll-snap-type property on the .slider container to initialize scroll snapping in the horizontal (x) direction. The mandatory keyword means we’re forcing the slider to snap on items in the container instead of allowing it to scroll at will and land wherever it wants.
  • We’re using the scroll-snap-align property on the divs to set the snapping on each item’s starting edge (or “left” edge in a typical horizontal left-to-right writing mode).

Good so far? Here’s what we’ve made up to this point:

See the Pen Cheap Slider, Scroll Snapped [forked] by Geoff Graham.

Calculating The Offset Size

Now that we have all of our pieces in place, it’s time to create the exact snapping layout we want. We already know what the maximum width of the page’s layout .container is because we set it up to change at different breakpoints with the variables we registered at the beginning. In other words, the .container’s width will never exceed the value of --c-max-width. We also know the container always has a padding equal to the value of --c-padding.

Again, our slider is outside of the .container, and yet, we want the scroll-snapped images to align with those values for a balanced page layout. Let’s create a new CSS variable, but this time scoped to the .slider and set it up to calculate the space between the viewport and the inside of the .container element.

.slider {
  --offset-width: calc(
    ((100% - (min(var(--c-max-width), 100%) + (var(--c-padding) * 2))) / 2) + (var(--c-padding) * 2)
  );
}

That is a lot of math! First, we take whichever is smaller: the .container element’s max-width or 100%. We then add the container’s inline padding (twice the --c-padding value) to that minimum and subtract the sum from 100%. From this, we get the total amount of space that is available to offset either side of the .slider to align with the layout .container.

We then divide this number by 2 to get the offset width for each specific side. And finally, we add the .container’s inline padding to the offset width so that the .slider is offset from the inside edges of the container rather than the outside edges. As a quick sanity check: with --c-max-width: 940px, --c-padding: 24px, and a 1200px-wide viewport, that works out to ((1200px - (940px + 48px)) / 2) + 48px = 154px of offset on each side, which is exactly where the container’s content begins. In the demo, I have used the universal selector (*) and its pseudo-elements to set box-sizing: border-box on all elements so that we are working inside the .slider’s borders rather than outside of them.

*, *::before, *::after {
  box-sizing: border-box;
}
Some Minor Cleanup

If you think that our code is becoming a bit too chaotic, we can certainly improve it a bit. When I run into these situations, I sometimes like to organize things into multiple custom properties just for easy reading. For example, we could combine the inline paddings that are scoped to the :root and update the slider’s --offset-width variable with a calc() function that’s a bit easier on the eyes.

:root {
  /* previous container custom properties */

   --c-padding-inline: calc(var(--c-padding) * 2);
}

.slider {
  --offset-width: calc(((100% - (min(var(--c-max-width), 100%) + var(--c-padding-inline))) / 2) + var(--c-padding-inline));

  /* etc. */
}

That’s a smidge better, right?

Aligning The Slider With The Page Layout

We have a fully functioning scroll-snapping container at this point! The last thing for us to do is apply padding to it that aligns with the layout .container. As a reminder, the challenge is for us to respect the page layout’s padding even though the .slider is a full-width element outside of that container.

This means we need to apply our newly-created --offset-width variable to the .slider. We’ve already scoped the variable to the .slider, so all we really need is to apply it to the right properties. Here’s what that looks like:

.slider {
  --offset-width: calc(
    ((100% - (min(var(--c-max-width), 100%) + var(--c-padding-inline))) / 2) + var(--c-padding-inline)
  );

  padding-inline: var(--offset-width);
  scroll-padding-inline-start: var(--offset-width);

  /* etc. */
}

The padding-inline and scroll-padding-inline-start properties are used to offset the slider’s contents from its left and right edges and to ensure that images always snap in alignment with the page layout when the user scrolls.

  • padding-inline
    This sets spacing inside the .slider’s inline edges. A nice thing about using this logical property instead of a physical property is that we can apply the padding in both directions in one fell swoop, as there is no physical property shorthand that combines padding-left and padding-right. This way, the .slider’s internal inline spacing matches that of the .container in a single declaration.
  • scroll-padding-inline-start
    This sets the scroll padding at the start of the slider’s inline dimension. This scroll padding is equal to the amount of space that is added to the left (i.e., inline start) side of the .slider’s content during the scroll.

Now that the padding-inline and scroll-padding-inline-start properties are both set to the value of the --offset-width variable, we can ensure that the slider is perfectly aligned with the start of our container and snaps with the start of that container when the user scrolls.

We could take all of this a step further by setting the gap between our slider items to the same value as our container padding. We’re really creating a flexible system here:

.slider {
  --gap: var(--c-padding);
  gap: var(--gap);
}

Personally, I would scope this into a new custom property of the slider itself, but it’s more of a personal preference. The full demo can be found on CodePen. I added a toggle in the demo so you can easily track the maximum width and paddings while resizing.

See the Pen Full width scroll snap that snaps to the container [forked] by utilitybend.

But we don’t have to stop here! We can do all sorts of calculations with our custom properties. Maybe instead of adding a fixed width to the .slider’s flex children, we want to always display three images at a time inside of the container:

.slider {
  --gap: var(--c-padding);
  --flex-width: calc((100% - var(--gap) * 2) / 3);

  /* Previous scroll snap code */

  > * {
    flex: 0 0 var(--flex-width);
    scroll-snap-align: start;
  }
}

That --flex-width custom property takes 100% of the container the slider is in and subtracts two times the --gap from it (the two gaps that sit between three visible items). And, because we want three items in view at a time, we divide that result by 3.

See the Pen Updated scroll container with 3 items fitted in container [forked] by utilitybend.
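
We could push this even further by turning the number of visible items into its own custom property. This is my own extension of the idea rather than part of the demo, and --per-view is a hypothetical variable name:

.slider {
  --per-view: 3; /* hypothetical: how many items fit in the container */
  --flex-width: calc(
    (100% - var(--gap) * (var(--per-view) - 1)) / var(--per-view)
  );

  > * {
    flex: 0 0 var(--flex-width);
  }
}

Because there is always one fewer gap than there are items, the (var(--per-view) - 1) term keeps the math correct for any item count.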

Why Techniques Like This Are Important

The best thing about using custom properties to handle calculations is that they are lighter and more performant than attempting to handle them in JavaScript. It takes some getting used to, but I believe that we should use these kinds of calculations a lot more often. Performance is such an important feature. Even seemingly minor optimizations like this can add up and really make a difference to the overall end-user experience.

And, as we’ve seen, we can plug in variables from other elements into the equation and use them to conform an element to the properties of another element. That’s exactly what we did to conform the .slider’s inner padding to the padding of a .container that is completely independent of the slider. That’s the power of CSS variables — reusability and modularity that can improve how elements interact within and outside other elements.

]]>
hello@smashingmagazine.com (Brecht De Ruyte)
<![CDATA[Building Components For Consumption, Not Complexity (Part 1)]]> https://smashingmagazine.com/2023/12/building-components-consumption-not-complexity-part1/ https://smashingmagazine.com/2023/12/building-components-consumption-not-complexity-part1/ Mon, 11 Dec 2023 21:00:00 GMT Design systems are on the tip of every designer’s tongue, but the narrative in the industry mainly focuses on why you need a design system and its importance rather than the reality of endless maintenance and internal politics. The truth is that design teams spend years creating these systems only to find out that few people adhere to the guidelines or stay within the “guardrails.”

I’ve been fortunate with Figma to host and run workshops at various conferences across Europe that center on one very specific aspect of product design: components.

I love components! They are the start, the end, the hair-tearing-out middle, the “building blocks,” and the foundation — the everything within every great product design organization.

The reason they are so important to me is because, firstly, who doesn’t like efficiency? Secondly, and more importantly, they are proven to speed up delivery from design to engineering, and here comes a buzzword — they offer a route to return on investment (ROI) within design teams. This is becoming increasingly important in a tough hiring market. So what’s not to love?

Note: You may be interested in watching Matt Gottschalk’s talk from Figma’s Config 2023 conference, which was dedicated to the topic of ROI within small design teams. The talk explored how small design teams can use design systems and design operations to help designers have the right environment for them to deliver better, more impactful results.

Like in most things, a solution to this is education and communication. If you aren’t comfortable with modifications on styles, you may want to set up your components in such a way as to indicate this. For example, using emojis in layers is the quickest way to say, “Hey, please don’t edit this!” or “You can edit this!”.

Or, consider shipping components that are named for intention. Would separating them entirely work better? An Input/Error, rather than a customizable Input?

When considering the emoji name approach, here’s a set that I’ve relied on in the past:

  1. Components
    • 🚨 Deprecated
    • 🟠 Not ready
  2. Component instances
    • 🔐️ Not editable
    • ✍️ Overwritten name
  3. Layers
    • ✍️ Editable
    • 🚨 Important
  4. Component properties
    • ◈ Type
    • ✍️ Edit text
    • 🔁️ Swap instance
    • 🔘 Toggle
    • ←️ Left
    • →️ Right
    • 🖱 Interaction
    • 📈 Data
    • ↔️ Size

Flexible: Responsive Design

Ahh, our old friend, responsive design (RWD)!

“The control which designers know in the print medium, and often desire in the web medium, is simply a function of the limitation of the printed page. We should embrace the fact that the web doesn’t have the same constraints and design for this flexibility. But first, we must accept the ebb and flow of things.”

— John Allsopp, “A Dao of Web Design”

As it stands, there is no native solution within Figma to create fully responsive components. What I mean by “fully responsive” is that layout directions and contents change according to their breakpoint.

Here’s an example:

Note: It is technically possible to achieve this example now with auto layout wrapping and min/max widths on your elements, but this does not mean that you can build fully responsive components. Instead, you will likely end up in a magic numbers soup with a lengthy list of variables for your min and max widths!

With this limitation in mind, we may want to reconsider goals around responsive design within our component libraries. This may take the form of adapting their structure by introducing… more components! Do we want to be able to ship one component that changes from mobile all the way up to the desktop, or would it be easier to use, find, and customize separate components for each distinct breakpoint?

“Despite the fun sounding name, magic numbers are a bad thing. It is an old school term for ‘unnamed numerical constant,’ as in, just some numbers put into the code that are probably vital to things working correctly but are very difficult for anyone not intimately familiar with the code to understand what they are for. Magic numbers in CSS refer to values which ‘work’ under some circumstances but are frail and prone to break when those circumstances change.”

— Chris Coyier, “Magic Numbers in CSS”

Never be hesitant to create more components if there is a likelihood that adoption will increase.

Then, what does this look like within Figma? We have a few options, but first, we need to ask ourselves a few questions:

  1. Does the team design for various screen sizes/dimensions? E.g., mobile and desktop web.
  2. Does the development team build for a specific platform/screen size(s)? E.g., an iOS team.
  3. Do you build apps aligning with native style guides? E.g., Material Design.

The answers to these questions will help us determine how we should structure our components and, more importantly, what our library structures will look like.

If the answer to questions 1. (Design for various screen sizes/dimensions?) and 2. (build for a specific platform/screen sizes?) is “No,” and to 3. (Build apps aligning with native style guides?) is “Yes,” to me, this means that we should split out components into separate component library files. We don’t want to enter into a world where an iOS component is accidentally added to a web design and pushed to production! This becomes increasingly common if we share component naming conventions across different platforms.

If the answer to question 3. (“Do we build native apps, using their design guidelines?”) is “Yes,” this definitely requires a separate component library for the platform-specific styles or components. You may want to investigate an option where you have a global set of styles and components used on every platform and then a more localized set for when designing on your native platforms.

The example below, showing a mapping of library files for an iOS design project, is inspired by my Figma community file (“Simple design system structure”), which I created to help you set up your design system more easily across different platforms.

Get “Simple design system structure” [FigJam file / Luis Ouriach, CC-BY license]

If you are designing across multiple platforms in a device-agnostic manner, you can bring components a lot closer together! If you aren’t currently working with an agnostic codebase, it might be worth checking Mitosis (“Write components once, run everywhere”).

A common challenge among development teams is using the same language; while one sub-team may be using Vue, another perhaps is using React, causing redundant work and forcing you to create shared components twice. In “Create reusable components with Mitosis and Builder.io,” Alex Merced explores in detail Mitosis, a free tool developed under the MIT license. Mitosis can compile code to standard JavaScript code in addition to frameworks and libraries such as Angular, React, and Vue, allowing you to create reusable components with more ease and speed.

Using A Variant

This may look like a variant set, with a specific property for device/size/platform added. For example,

As you can imagine, as those components increase in complexity (with different states added, for example), this can become unwieldy. Combining them into the same variant is useful for a smaller system, but across larger teams, I would recommend splitting them up.

Using Sections

You could consider grouping your components into different sections for each platform/device breakpoint. The approach would be the following:

  1. Use pages within Figma libraries to organize components.
  2. Within the pages, group each breakpoint into a section. This is titled by the breakpoint.
  3. Name the component by its semantic, discoverable name.

There is a caveat here! I’m sure you’re wondering: “But couldn’t variables handle these breakpoints, removing the need for different components?” The answer, as always, is that it’s down to your specific implementation and adoption of the system.

If your designer and developer colleagues are comfortable working within the variable workflow, you may be able to consolidate them! If not, we may be better served with many components.

Additionally, the split-component approach allows you to handle components in a structurally different manner across these different sizes — something that is not currently possible with variants.

Auto Layout

Regardless of how we organize the components, responsiveness can be pushed very far with the use of auto layout at every level of our screens. Although it can be intimidating at first, auto layout makes components work similarly to how they would be structured in HTML and CSS, moving design and engineering teams closer together.
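
As a rough mental model (my sketch, not an official mapping), a vertical auto layout frame in Figma corresponds closely to a flex container in CSS:

/* What a "vertical" auto layout frame roughly translates to in CSS */
.auto-layout-frame {
  display: flex;
  flex-direction: column;  /* Figma's vertical direction */
  gap: 8px;                /* Figma's spacing between items */
  padding: 16px;           /* Figma's frame padding */
  align-items: flex-start; /* Figma's alignment controls */
}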

Let’s take a simple example: a generic input field. In your main component, you’re likely to have a text layer within it with generic content, e.g., “Label.” Generics are useful! It means that we can swap this content to be specific to our needs at an instance level.

Now, let’s say you insert this component into your design and swap that “Label” content for a label that reads “Email address.” This is our override; so far, so good.

However, if you then decide to change your main component structurally, you put that label at risk of losing its overrides. As an example of a structural change, your original “Placeholder” now becomes a “Label” above the input field. Instinctively, this may mean creating a new text element for the label. But! Should you try this, you are losing the mapping between your original text element and the new one.

This could potentially break your existing signed-off designs. Even though this seems like it could work — layer names are a great way to preserve overrides — they are separate elements, and Figma won’t know how to transfer that override to the new element.

At this point, introducing component properties can save us from this trouble. I’d recommend adding a text component property to all of your text layers in order to try to prevent any loss of data across the design files in which we are using the component.

As I showed before, I find adding a writing emoji (✍️) to the property name is a nice way to keep our component properties panel as scannable as possible.

Content Specificity

A decision then needs to be made about how specific the default content is within the component.

And this is where we should ask ourselves a question: do we need to change this content frequently? If the answer is yes, abstracting specific textual values from components means that they can be interpreted more widely. It’s a little bit of reverse psychology, but a text layer reading “[placeholder]” would prompt a designer to change it to their local use case.

If the answer is no, we will bake the fixed value we want into the component. Going back to our input field example, we might set the default label value to be “Email address” instead of “placeholder.” Or, we could create an entirely new email address component! (This is a call we’d need to make based on anticipated/recorded usage of the component.)

Imagery / Media

When setting up a content system within Figma, a few different questions immediately pop up:

  1. How do you use specific media for specific components?
  2. How do you fix aspect ratios for media?

Within Figma, an image is essentially a fill within a shape rather than its own content type, and this impacts how we manage that media. There are two ways to do this:

  1. Using styles.
  2. Using component sets (variants).

Before we look at styles and components, though, let’s take a look at the format that all assets within Figma could take.

Practically, I would advise setting up your media assets as their own library within Figma, potentially even setting up a few libraries if you work across various products/brands with different approaches to media.

For example, the imagery your product team uses within design files and marketing materials is likely to be very different, so we would look to set these up as different Figma libraries. A designer using those assets would toggle “on” the library they need to create an asset for a specific intention, keeping the right media in the right place.

Because this media is the same as any other style or component within Figma, we can use slash naming conventions to group types of media within the names.

Domain examples:

  • Company website,
  • Product,
  • Marketing,
  • Sub brand/s.

Media types:

  • Logo,
  • Icon,
  • Illustration,
  • Image,
  • Video.

Example names, using the format:

  • Figma.com/Logo/Figma,
  • Figma.com/Icon/Variable,
  • Figma.com/Illustration/Components,
  • Figma.com/Image/Office,
  • Designsystems.com/Logo/Stripe,
  • Designsystems.com/Icon/Hamburger,
  • Designsystems.com/Illustration/Orbs,
  • Designsystems.com/Image/Modular grid.

These are split into:

  • Library: Figma.com or Designsystems.com,
  • Media type: Illustration or Logo,
  • Media name: e.g., Component libraries, Iconography.

Although I’m using images for the examples here, it works with video assets, too! This means we can move in the direction of effectively using Figma like a mini DAM (digital asset manager) and iterate fast on designs using brand-approved media assets, rather than relying on dummy content.

“A digital asset management solution is a software solution that provides a systematic approach to efficiently storing, organizing, managing, retrieving, and distributing an organization’s digital assets. DAM functionality helps many organizations create a centralized place where they can access their media assets.”

— IBM, “What is digital asset management?”

Using Fill Styles

Fill styles aren’t just for color! We can use them for images, videos, and even illustrations if we want to. It’s worth bearing in mind that because of the flexible nature of fills, you may want to consider working within fixed sizes or aspect ratios to ensure cropping is kept to a minimum.

Figma’s redline “snapping” feature lets us know when the original asset’s aspect ratio is being respected as we resize. It’s a pretty cool trick!

You can get the above example from Figma’s community:

“Fixed aspect ratio images with variants” [Figma file / Luis Ouriach, CC-BY license]

For this world, though, I would advise against trying to hack Figma into setting up fully responsive images. Instead, I’d recommend working with a predefined fixed set of sizes in a component set. This may sound like a limitation, but I strongly believe that the more time we spend inside Figma, the further we get from the production environment. “Can we test this in the actual product?” is a question we should be asking ourselves frequently.

Practically, this looks like creating a component set where we set the fixed sizes along one dimension and the aspect ratio along the other. Creating a matrix like this means we can use the Component Properties panel to toggle between sizes and aspect ratios, preserving the media inside the component.

This can be used in tandem with a separate set of components specifically for images. If we combine this with Figma’s “nested instances” feature within variant components, we can “surface” all the preferred images from our component set within every instance at the aspect ratios needed!

Arrangement

This is the hardest thing to predict when we think through the usability of customizable components. The simplest example here is our old enemy: the form. Instinctively, we may create a complete form in a component library and publish it to the team. This makes sense!

The issue is that when a designer working on a particular project requires a rearrangement of that structure, we are kind of in trouble.

This problem extends to almost all component groups that require manipulation. Tables, menus, lists, forms, navigation… we will hit this wall frequently. This is where I’d like to introduce the concept of fixed vs flexible content within components, which should help to address the future problems of a world where we put the DRY (don’t repeat yourself) principles at risk.

As design system maintainers, we naturally want to keep components as composable as possible. How can this one component be used in lots of different ways without requiring us to ship an infinite number of variations? This is the central theme of the DRY principle but can be challenging in design tools because of the lack of component order management within the main components.

As a result, we often end up in a world where we build, maintain, and ship endless variations of the same component in an attempt to keep up with snowflake implementations of our core component.

“‘When should we make something a component?’ is a question I’ve been fielding for years. My strong answer: right from the start. Creating things with a component-based mindset right out the gate saves countless hours — everything is a component!”

— Brad Frost, “Design system components, recipes, and snowflakes”

For example, the form we spoke about before could be one for:

  • Logging in;
  • Registering;
  • Signing up to the newsletter;
  • Adding billing information.

These are all clearly forms that require different data points and functionality from an engineering perspective, but they will most likely share common design foundations, e.g., padding, margins, headings, labels, and input field designs. The question then becomes, “How can we reduce repetition whilst also encouraging combinatorial design?”

A concept that has been long-used in the developer world and loosely agreed upon in the design community is termed “component slots.” This approach allows the design system maintainers to ship component containers with agreed properties — sizing, padding, and styles — whilst allowing for a flexible arrangement of components inside it.
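
As an aside, the web platform expresses the same idea natively: web components use a <slot> element to mark the flexible interior of an otherwise fixed shell. The following is a minimal sketch, with the form-wrapper name and inline styles invented for illustration:

<script>
  // A shell component: the wrapper styling is fixed, the contents are not.
  class FormWrapper extends HTMLElement {
    constructor() {
      super();
      const root = this.attachShadow({ mode: "open" });
      root.innerHTML = `
        <form style="display: grid; gap: 12px; padding: 16px;">
          <slot><!-- consumers place their own fields here --></slot>
        </form>`;
    }
  }
  customElements.define("form-wrapper", FormWrapper);
</script>

<!-- Usage: the shell stays consistent; the fields are arranged locally. -->
<form-wrapper>
  <label>Email <input type="email" name="email"></label>
  <button type="submit">Log in</button>
</form-wrapper>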

Taking our previous form examples, we can then abstract the content — login form, register form, newsletter form, and billing information form — and provide a much simpler shell component from the library. The designers using this shell (let’s call it a “form/wrapper”) will then build their forms locally and replace the slot component inside the shell with this new custom main component.

This is best explained visually:

Does this custom component need to live in multiple files? If yes, we move it to the next level up, either team-level libraries or global, if working on a smaller system. If not, we can comfortably keep that component local to the specific Figma file on a page (I like to call it “❖ Components”).

Important: For this premise to really work, we must employ auto layout at every level, with no exceptions!

Conclusion

That was a lot to process (over five thousand words, actually), and I think it’s time for us to stop staring at the computer screen and take a little break before walking through the next set of principles.

Go grab a drink or take some rest, then meet me in Part 2, where you will learn even more about the adoptable, indexable, logical, and specific components.

]]>
hello@smashingmagazine.com (Luis Ouriach)
<![CDATA[Preparing For Interaction To Next Paint, A New Web Core Vital]]> https://smashingmagazine.com/2023/12/preparing-interaction-next-paint-web-core-vital/ https://smashingmagazine.com/2023/12/preparing-interaction-next-paint-web-core-vital/ Thu, 07 Dec 2023 21:00:00 GMT This article is sponsored by DebugBear

There’s a change coming to the Core Web Vitals lineup. If you’re reading this before March 2024 and fire up your favorite performance monitoring tool, you’re going to get a Core Web Vitals report like this one pulled from PageSpeed Insights:

You’re likely used to seeing most of these metrics. But there’s a good reason for the little blue icon sitting next to the second metric in the second row, Interaction to Next Paint (INP). It’s the newest metric of the bunch and is set to formally be a ranking factor in Google search results beginning in March 2024.

And there’s a good reason that INP sits immediately below the First Input Delay (FID) in that chart. INP will officially replace FID when it becomes an official Core Web Vital metric.

The fact that INP is already available in performance reports means we have an opportunity to familiarize ourselves with it today, in advance of its release. That’s what this article is all about. Rather than pushing off INP until after it starts influencing the way we measure site performance, let’s take a few minutes to level up our understanding of what it is and why it’s designed to replace FID. This way, you’ll not only have the information you need to read your performance reports come March 2024 but can proactively prepare your website for the change.

“I’m Not Seeing Those Metrics In My Reports”

Chances are that you’re looking at Lighthouse or some other report based on lab data. And by that, I mean data that isn’t coming from the field in the form of “real” users. You configure the test by applying some form of simulated throttling and start watching the results pour in. In other words, the data is not looking at your actual web traffic but a simulated environment that gives you an approximate view of traffic when certain conditions are in place.

I say all that because it’s important to remember that not all performance data is equal, and some metrics are simply impossible to measure with certain types of data. INP and FID happen to be a couple of metrics where lab data is unsuitable for meaningful results, and that’s because both INP and FID are measurements of user interactions. That may not have been immediately obvious by the name “First Input Delay,” but it’s clear as day when we start talking about “Interaction to Next Paint” — it’s right there in the name!

Simulated lab data, like what is used in Lighthouse reports, does not interact with the page. That means there is no way for it to evaluate the first input a user makes or any other interactions on the page.

So, that’s why you’re not seeing INP or FID in your reports. If you want these metrics, then you will want to use a performance tool that is capable of using real user data, such as DebugBear, which can monitor your actual traffic on an ongoing basis in real time, or PageSpeed Insights, which bases its findings on Google’s “Chrome User Experience Report” (commonly referred to as CrUX), though DebugBear is capable of providing CrUX reporting as well. The difference between real-time user monitoring and measuring performance against CrUX data is big enough that it’s worth reading up on it, and we have a full article on Smashing Magazine that goes deeply into the differences for you.

INP Improves How Page Interactions Are Measured

OK, so we now know that both INP and FID are about page interactions. Specifically, they are about measuring the time between a user interacting with the page and the page responding to that interaction.

What’s the difference between the two metrics, then? The answer is two-fold. First, FID is a measure of the time it takes the page to start processing an interaction, i.e., the input delay. That sounds fine on the surface — we want to know how much time passes before the page starts reacting to a user’s input and optimize it if we can. The problem, though, is that the input delay is just one part of the time it takes for the page to fully respond to an interaction.

A more complete picture considers the input delay in addition to two other components: processing time and presentation delay. In other words, we should also look at the time it takes to process the interaction and the time it takes for the page to render the UI in response. As you may have already guessed, INP considers all three delays, whereas FID considers only the input delay.

The second difference between INP and FID is which interactions are evaluated. FID is not shy about which interaction it measures: the very first one, as in the input delay of the first interaction on the page. We can think of INP as a more complete and accurate representation of how fast your page responds to user interactions because it looks at every single one on the page. It’s probably rare for a page to have only one interaction, and whatever interactions there are after the first interaction are likely located well down the page and happen after the page has fully loaded.

So, where FID looks at the first interaction — and only the input delay of that interaction — INP considers the entire lifecycle of all interactions.
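
The article describes these components in prose, but if you want to see them in your own browser console, the Event Timing API that INP is built on exposes timestamps that map roughly onto all three. A hedged sketch:

// Log the approximate breakdown of each sufficiently slow interaction.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const inputDelay = entry.processingStart - entry.startTime;
    const processingTime = entry.processingEnd - entry.processingStart;
    // entry.duration spans from startTime to the next paint (rounded to 8ms),
    // so the remainder after processing ends is the presentation delay.
    const presentationDelay =
      entry.duration - (entry.processingEnd - entry.startTime);
    console.log(entry.name, { inputDelay, processingTime, presentationDelay });
  }
});
observer.observe({ type: "event", durationThreshold: 16, buffered: true });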

Measuring Interaction To Next Paint

Both FID and INP are measured in milliseconds. Don’t get too worried if you notice your INP time is greater than your FID. That’s bound to happen when all of the interactions on the page are evaluated instead of the first interaction alone.

Google’s guidance is to maintain an FID under 100ms. And remember, FID does not take into account the time it takes for the event to process, nor does it consider the time it takes the page to update following the event. It only looks at the delay before the event begins processing.

And since INP does indeed take all three of those factors into account — the input delay, processing time, and presentation delay — Google’s thresholds for INP are inherently larger than FID’s: under 200ms for a “good” result and between 200ms and 500ms for a passing result. Any interaction that adds up to a delay greater than 500ms is a clear bottleneck.

The goal is to spot slow interactions and optimize them for a smoother user experience. How exactly do you identify those problems? That’s what we’re looking at next.

Identifying Slow Interactions

There’s already plenty you can do right now to optimize your site for INP before it becomes an official Core Web Vital in March 2024. Let’s walk through the process.

Of course, we’re talking about the user doing something on the page, i.e., an action such as a click or keyboard focus. That might be expanding a panel in an accordion component, triggering a modal, or prompting any change of state where the UI updates in response.

Your page may consist of little more than content and images, making for very few, if any, interactions. It could just as well be some sort of game-based UI with thousands of interactions. INP can be a heckuva lot of work, but it really comes down to how many interactions we’re talking about.

We’ve already talked about the difference between field data and lab data and how lab data is simply unable to measure page interactions accurately. That means you will want to rely on field data when pulling INP reports to identify bottlenecks. And when we’re talking about field data, we’re talking about two different flavors:

  1. Data from the CrUX report that is based on the results of real Chrome users. This is readily available in PageSpeed Insights and Google Search Console, not to mention DebugBear. If you use either of Google’s tools, just note that their throttling methods collect metrics on a fast connection and then estimate how fast the page would be on a slower connection. DebugBear actually tests with a slower network, resulting in more accurate data.
  2. Monitoring your website’s real-time traffic, which will require adding a snippet to your source code that sends traffic data to a service. And, yes, DebugBear is one such service, though there are others. You can even take advantage of historical CrUX data integrated with BigQuery to get a historical view of your results dating back as far as 2017 with new data coming in monthly, which isn’t exactly “real-time” monitoring of your actual traffic, but certainly useful.

You will get the most bang for your buck with real-time monitoring that keeps a historical record of data you can use to evaluate INP results over time.

That said, you can still start identifying bottlenecks today if you prefer not to dive into real-time monitoring right this second. DebugBear has a tool that analyzes any URL you throw at it. What’s great about this is that it shows you the elements that receive user interaction and provides the results right next to them. The result of the element that takes the longest is your INP result. That’s true whether you have one component above the 500ms threshold or 100 of them on the page.

The fact that DebugBear’s tool highlights all of the interactions and organizes them by INP makes identifying bottlenecks a straightforward process.

See that? There’s a clear INP offender on Smashing Magazine’s homepage, and it comes in slightly outside the healthy INP range for a score of 510ms even though the next “slowest” result is 184ms. There’s a little work we need to do between now and March to remedy that.

Notice, too, that there are actually two scores in the report: the INP Debugger Result and the Real User Google Data. The results aren’t even close! If we were to go by the Google CrUX data, we’re looking at a result that is 201ms faster than the INP Debugger’s result — a big enough difference that would result in the Smashing Magazine homepage fully passing INP.

Ultimately, what matters is how real users experience your website, and you need to look at the CrUX data to see that. The elements identified by the INP Debugger may cause slow interactions, but if users only interact with them very rarely, that might not be a priority to fix. But for a perfect user experience, you would want both results to be in the green.

Optimizing Slow Interactions

This is the ultimate objective, right? Once we have identified slow interactions — whether through a quick test with CrUX data or a real-time monitoring solution — we need to optimize them so their delays are at least under 500ms, but ideally under 200ms.

Optimizing INP comes down to CPU activity at the end of the day. But as we now know, INP measures two additional components of interactions that FID does not for a total of three components: input delay, processing time, and presentation delay. Each one is an opportunity to optimize the interaction, so let’s break them down.

Reduce The Input Delay

This is what FID is solely concerned with, and it’s the time it takes between the user’s input, such as a click, and for the interaction to start.

This is where the Total Blocking Time (TBT) metric is a good one because it looks at CPU activity happening on the main thread, which adds time for the page to be able to respond to a user’s interaction. TBT does not count toward Google’s search rankings, but FID and INP do, and both are directly influenced by TBT. So, it’s a pretty big deal.

You will want to heavily audit what tasks are running on the main thread to improve your TBT and, as a result, your INP. Specifically, you want to watch for long tasks on the main thread, which are those that take more than 50ms to execute. You can get a decent visualization of tasks on the main thread in DevTools:

The bottom line: Optimize those long tasks! There are plenty of approaches you could take depending on your app. Not all scripts are equal in the sense that one may be executing a core feature while another is simply a nice-to-have. You’ll have to ask yourself:

  • Who is the script serving?
  • When is it served?
  • Where is it served from?
  • What is it serving?

Then, depending on your answers, you have plenty of options for how to optimize your long tasks:

Or, nuke any scripts that might no longer be needed!
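
If you would rather catch long tasks in the field than in DevTools, the Long Tasks API can flag main-thread work over 50ms programmatically. A minimal sketch:

// Report any main-thread task that blocks for longer than 50ms.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.attribution hints at the responsible frame or script, when known.
    console.warn(`Long task: ${Math.round(entry.duration)}ms`, entry.attribution);
  }
}).observe({ type: "longtask", buffered: true });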

Reduce Processing Time

Let’s say the user’s input triggers a heavy task, and you need to serve a bunch of JavaScript in response — heavy enough that you know a second or two is needed for the app to fully process the update.
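
One widely used pattern in that situation is to make a cheap, visible UI update first and then yield to the main thread so the browser can paint it before the heavy work begins. In this sketch, button, spinner, and processHeavyUpdate() are hypothetical stand-ins:

button.addEventListener("click", () => {
  // 1. A cheap DOM update the user sees immediately.
  spinner.hidden = false;

  // 2. Yield so the browser can paint that feedback, then run the
  //    expensive work in a separate task.
  setTimeout(() => {
    processHeavyUpdate(); // hypothetical: the slow part of the interaction
    spinner.hidden = true;
  }, 0);
});

Because INP measures the time to the next paint, painting that early feedback means the interaction can complete long before the heavy work finishes.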

Reduce Presentation Delay

Reducing the time it takes for the presentation is really about reducing the time it takes the browser to display updates to the UI, paint styles, and do all of the calculations needed to produce the layout.

Of course, this is entirely dependent on the complexity of the page. That said, there are a few things to consider to help decrease the gap between when an interaction’s callbacks have finished running and when the browser is able to paint the resulting visual changes.

One thing is being mindful of the overall size of the DOM. The bigger the DOM, the more HTML that needs to be processed. That’s generally true, at least, even though the relationship between DOM size and rendering isn’t exactly 1:1; the browser still needs to work harder to render a larger DOM on the initial page load and when there’s a change on the page. That link will take you to a deep explanation of what contributes to the DOM size, how to measure it, and approaches for reducing it. The gist, though, is trying to maintain a flat structure (i.e., limit the levels of nested elements). Additionally, reviewing your CSS for overly complex selectors is another piece of low-hanging fruit to help move things along.

While we’re talking about CSS, you might consider looking into the content-visibility property and how it could possibly help reduce presentation delay. It comes with a lot of considerations, but if used effectively, it can provide the browser with a hint as far as which elements to defer fully rendering. The idea is that we can render an element’s layout containment but skip the paint until other resources have loaded. Chris Coyier explains how and why that happens, and there are aspects of accessibility to bear in mind.
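
In practice, and assuming long below-the-fold sections, that hint can be as small as this (the class name is made up):

/* Let the browser skip rendering work for off-screen sections. */
.below-fold-section {
  content-visibility: auto;
  contain-intrinsic-size: auto 500px; /* reserves an estimated height */
}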

And remember, if you’re outputting HTML from JavaScript, that JavaScript will have to load in order for the HTML to render. That’s a potential cost that comes with many single-page application frameworks.

Gain Insight On Your Real User INP Breakdown

The tools we’ve looked at so far can help you look at specific interactions, especially when testing them on your own computer. But how close is that to what your actual visitors experience?

Real user-monitoring (RUM) lets you track how responsive your website is in the real world:

  • What pages have the slowest INP?
  • What INP components have the biggest impact in real life?
  • What page elements do users interact with most often?
  • How fast is the average interaction for a given element?
  • Is our website less responsive for users in different countries?
  • Are our INP scores getting better or worse over time?

There are many RUM solutions out there, and DebugBear RUM is one of them.
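
If you want to see what the plumbing of such a solution looks like, Google’s open-source web-vitals library exposes an onINP() callback that fires with the final measurement. The /rum endpoint below is hypothetical:

import { onINP } from "web-vitals";

// Beacon each INP measurement to your own collection endpoint.
onINP(({ name, value, rating }) => {
  navigator.sendBeacon("/rum", JSON.stringify({ name, value, rating }));
});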

DebugBear also supports the proposed Long Animation Frames API that can help you identify the source code that’s responsible for CPU tasks in the browser.

Conclusion

When Interaction to Next Paint makes its official debut as a Core Web Vital in March 2024, we’re gaining a better way to measure a page’s responsiveness to user interactions that is set to replace the First Input Delay metric.

Rather than looking at the input delay of the first interaction on the page, we get a high-definition evaluation of the least responsive interaction on the page — including the input delay, processing time, and presentation delay — whether it’s the first interaction or another one located way down the page. In other words, INP is a clearer and more accurate way to measure the speed of user interactions.

Will your app be ready for the change in March 2024? You now have a roadmap to help optimize your user interactions and prepare ahead of time as well as all of the tools you need, including a quick, free option from the team over at DebugBear. This is the time to get a jump on the work; otherwise, you could find yourself with unidentified interactions that exceed the 500ms threshold for a “passing” INP score that negatively impacts your search engine rankings… and user experiences.

]]>
hello@smashingmagazine.com (Geoff Graham)
<![CDATA[Five-Second Testing: Taking A Closer Look At First Impressions (Case Study)]]> https://smashingmagazine.com/2023/12/five-second-testing-case-study/ https://smashingmagazine.com/2023/12/five-second-testing-case-study/ Wed, 06 Dec 2023 10:00:00 GMT In today’s world of shortening attention spans and omnipresent hustle, wasting even a second could mean losing the chance to earn more time from a person you want to impress. If your interests lie in creating good user experiences, there is a fair chance you have heard of five-second testing.

Five-second testing is an established technique of usability research used by UX researchers, designers, product managers, and in a variety of other professions, such as marketing or business analysis.

In short, you show a picture of whatever you are designing (site, app, pair of socks) to a member of your intended audience for exactly five seconds. Then, you hide the picture and ask the participant a couple of questions. The goal is to learn whether the reaction — the participant’s first impression — is what you wanted to see. Did you get the main message across? Do people remember the company’s name? Sounds like an efficient way to test your product without needing to turn to full-on usability testing, right?

Note: The word “participant” in this article is used to refer to users involved in five-second testing or related usability research methods. The word “user” is used in more general contexts since users form first impressions all the time, not just when you are testing it.

Why is it five seconds exactly, though? Are five seconds some magical moment when everything we see should become clear? And if it does not, does it automatically mean that a user experience is bad? Or are five seconds just the right amount of time for first impressions to brew in the user’s mind so that they’re neither undercooked nor overcooked?

These are some of the questions that we asked ourselves. Not satisfied with the answers written by others who covered the topic before us, we kept drilling and conducted an actual peer-reviewed scientific case study, exploring the hidden truths behind the testing of first impressions. The research paper examines the five-second test and discusses the results.

So strap in and read what science has to say about five-second testing. And then, what the implications are for you so that you can take practical advantage of this new knowledge to develop better first impressions of your services or products. But first, let us delve into what we know about five-second testing and its caveats so that you see the greater picture of the focal points of our investigation.

The Mythos Of Five Seconds And Its Gaps

You may be familiar with the well-known statistic that a website has about ten seconds to communicate its key message to the user. Knowing that waiting only five seconds to ask testing participants about their first impressions may suddenly seem like an odd choice. If indeed visitors of a website have about ten seconds to grasp a message, are five seconds really enough time for users? There is an alleged justification, as we explain below.

Cutting ahead a bit, though, the factual basis for it is admittedly a bit of a Wild West if you go looking for hard data to support it. There is an almost uncanny resemblance to another not wholly scientific five-second rule that says it’s okay to eat food off the ground if it’s within five seconds from when it dropped there.

The five-second testing method has its origins as a simplification of usability testing. The first references to five-second testing point to Christine Perfetti, who coined the term for the method in the mid-2000s. The answer to “Why five seconds exactly?” comes largely from anecdotal evidence in the form of the experience of usability researchers.

The common story is that if something is shown to participants for more than five seconds, their first impressions will start to deviate from the actual user’s genuine initial impressions. The participant’s perspective becomes more analytical and less task-driven. The five-second test lets you avoid overtly speculative feedback that nobody would give you under normal circumstances.

Fair enough, that could potentially be true. But five seconds is still quite a short period of time. Consider how different people can be when it comes to their cognitive abilities (and there is nothing wrong with that). For example, one user’s sharp perception may let them realistically form first impressions in five seconds or faster, but another user may barely have the time to blink, much less absorb any meaningful information; they need a moment to take it in at their own tempo.

The reasoning starts to fall apart a bit more at the seams once you also consider the visual complexity of the stimulus (a.k.a. the picture you show to the participants). The nature of the things you may want to test can range from very simple to very complex. If the stimulus is simple, it may take participants even less than five seconds to form their initial impressions. Would this mean that they risk using the remaining time to become overly analytical?

Conversely, there is the question of whether five seconds is enough time to let participants realistically visually scan a more complex stimulus. I can already hear staunch proponents of five-second testing saying that this last discrepancy is actually rightfully intentional. It’s a feature, not a bug, if you will.

After all, if a stimulus is too complex, that is exactly why you conduct five-second testing. It allows us to find out about things like complexity. It can help you find out if participants cannot extract the key information you want to communicate so that you can fix it.

However, we need to consider that not all user interfaces are the landing pages of websites. They serve to support different user tasks, some of which cannot avoid having a certain degree of complexity.

Five-second testing guides typically avoid directly addressing testing of these types of user interfaces by saying that the method has the following limitation: it should not be used to test user interfaces with multiple purposes. If the same stimulus serves for more than one task, it is alleged that you should probably conduct full-fledged usability testing, which is technically correct (the best kind of correct).

Giving up on five-second testing for inherently more complex user interfaces, however, also means giving up its advantages for measuring and optimizing first impressions. For instance, the idea that a screenshot or a mockup is all you need to quickly find usability problems and iterate on your designs. This is where five-second testing really shines.

Usability testing does not tell you accurately what the actual first impressions are without considerably interrupting the participant. And even then, you would encounter the same problem: At what moment from when the participant is exposed to a design should their first impressions be gauged so that they are genuine?

As we have discussed so far, there are certainly a fair number of question marks surrounding five-second testing. The method still undeniably has a number of merits, as proven by our experience at UXtweak, where we also provide our own Five Second Test tool. A lack of proper research on the topic is what drove us at UXtweak Research to conduct our very own case study.

The Science, Abridged

Essentially, what we sought to investigate in our case study are the relations among a number of key factors that are crucial for five-second testing:

  • What are the cognitive abilities of the participant engaging in the five-second test?
  • How visually complex is the stimulus shown to the participant?
  • For how long is the stimulus shown to the participant?
  • What kinds of questions do we ask the participants afterward?
  • What is the feedback that participants give you?

As you may have noticed, time — that iconic yet controversial five-second threshold — is considered a variable factor. In our experiment, we investigate the differences in feedback between three separate groups of participants who are shown pictures for either five (5) seconds or, alternatively, two (2) or ten (10) seconds (a bit less and a bit more time, respectively). This means that it would not be correct to refer to it as just a five-second test anymore, but rather an N-second test (or a first impression test, if you do not wish to be too pedantic about the number of seconds).

Each participant first passes not just one but two cognitive ability tests. Human minds are multifaceted, and there is not just a single “cognitive ability” metric that would encompass everything that the mind can do. Among standard tests used by psychologists, we picked two that are linked to the abilities most relevant to the formation and testing of first impressions:

  • Perceptual speed: How quickly you pick up visual information.
  • Working memory: How much information you can mentally process at the same time.

Working memory is the appropriate memory ability to focus on since it operates with information that receives the user’s attention. This distinguishes it from sensory memory (the memory processing information that our senses pick up) and long-term memory, where information is stored persistently for later use.

For the first impression test itself, six website screenshots were used as the stimuli. These screenshots were selected for possessing a broad range of visual complexity, from the simplest with just a few visual elements to the most complex with a number of distinct sections that serve different purposes.

Screenshots of real websites local to Czechia and Slovakia were translated into English, and their logos were replaced with fictional brand names so that, for all intents and purposes, the website screenshots would be authentic yet also unfamiliar to the participants who were recruited in the UK.

Finally, participants were asked to provide feedback by answering practically a complete portfolio of the various types of questions that can be typically asked during a first impression test. Each type of question tests a different aspect of the first impressions that the participants have formed inside their heads:

  • Attitudinal questions: Rating a perceived quality of the website (e.g., ugly vs. attractive) on a scale from 1 to 7.
  • Target identification questions: Questions directed at specific elements or aspects of the stimulus.
  • Memory dump questions: Asking participants to describe everything that they remember about what they saw.

The resulting answers were analyzed both quantitatively (with statistics) and qualitatively (by inspecting the contents of the received answers on an individual level). With it, a number of conclusions can be reached, some expected and some rather surprising.

Now that you have a picture of what our case study was about, let’s dive into the actual, interesting implications for developing the first impressions of your product.

Note: If you would like to immerse yourself in further details of how our case study was conducted, you can learn more in our scientific paper.

Takeaways

Statistically, all the variables we experimented with — the time duration of showing pictures to participants, the participants’ innate cognitive abilities, and the visual complexity of pictures — had a significant effect on the first impression answers.

For instance, between the groups that were shown screenshots of websites for two, five, and ten seconds, the number of answers that incorrectly identified what the websites were for dropped as viewing time increased. Notable is the difference between five and ten seconds: if participants really were focusing only on inconsequential details after five seconds, there should be no difference in the recognition of such a key aspect as the website’s entire purpose.

Statistical differences lay the grounds for further observations on how changing the conditions of a test can (or cannot) affect its results:

  • Attitudes crystallize faster than in five seconds.
    In attitudinal questions where participants are asked to rate how they view the picture’s various qualities (e.g., from clear to confusing, from captivating to dull), answers stay relatively consistent, regardless of how much time the participant has or how good their cognitive abilities are. If you are laser-focused on assessing participants’ attitudes about your product and nothing else, you could present pictures for two seconds, or possibly even less, as research done by others on a related topic also implies.
  • Logos are recognized earlier than in five seconds (with one exception).
    The target identification questions where participants are asked to recall the company name from the logo are, on the whole, impacted by time very little. This is to be expected: when viewing a website, our eyes are usually drawn to the top left corner to find out where we have found ourselves. There is an exception to this rule, however.
    Among participants with slower perceptual speed, significantly fewer identified the company name correctly at two seconds when compared to five seconds. This establishes five seconds as a more inclusive choice for timing your first impression test if you expect your target audience to have, on average, lower perceptual speed than the general populace and if the primary aim is to test contents of the header, such as logo design or company name identification. Otherwise, two seconds is a safe bet.
  • Irrelevant nitpicking? Yes, if visual complexity is low.
    In some cases, the popular narrative about five seconds being a good viewing time for testing first impressions is indeed true. Particularly for the simplest website screenshots, once five seconds have elapsed, participants start paying attention to minute details (e.g., the girl’s shirt color in the hero image).
    Curiously, though, having more time does not mean that participants would write longer or more complex answers. Instead, when participants have ten seconds to view the screenshots, the higher visual complexity of the screenshots is reflected in better-quality answers. Participants stay more on-topic, describing how the site is visually structured or justifying their criticisms of the page’s design. Different viewing times may be optimal in different situations. Especially since…
  • Low working memory warrants longer viewing time.
    When asked to reiterate what they saw in their own words, participants with low and high working memory provided significantly different answers. With low working memory, answers become shorter, less complex, and recall fewer concepts overall. However, when the viewing time is extended to ten seconds, these differences disappear. This implies that the same information is being processed — memory capacity just dictates how fast it can happen.
    Without knowing where each participant’s memory ability stands, it is difficult to tell what they would actually recall if we left them to work at their own pace. Consequently, assessment of working memory before testing first impressions (and adjusting viewing time accordingly) should be considered a good practice.
  • For cognitive powerhouses, five seconds are enough.
    A less practical point maybe, but if you are developing an app for people with reasonably high perceptual speed and working memory — be it the mentally gifted, hyperproductive hustle enthusiasts, or caffeine addicts — you could likely show them your screenshots for just two seconds and get similar results as in a five-second test.
  • Give participants the proper amount of time to form a first impression.
    When the visual stimulus is more visually complex in a first impression test, the task of mentally processing it becomes more difficult and time-consuming (just like in any normal scenario). This manifests in test results. Fewer people correctly identify the purpose of a more visually complex website, and they recall fewer elements and aspects of the website.
    This could be seen as a bit of a paradox since more complex stimuli mean there is actually more content that participants could potentially remember and comment on, but only if they had the time to absorb the information properly. Data shows that when participants are given ten seconds, the answers do actually normalize, becoming more similar to those given for stimuli of lower visual complexity.

If the purpose of the particular first impression test is not to remove all visual complexity at any cost outright, we would suggest adjusting the viewing time to reflect the visual complexity of the stimulus.

Keep in mind there are still aspects of first impression testing that remain unknown. A reasonable question that you can ask now would be: “Okay, so how exactly do I time my first impression test?” While we can sum up our observations into a conceptual framework of how time can be treated in first-impression test planning, it is not an exact guideline; there may be other interpretations or exceptions.

Take this more as an eye-opener and a call to action. Indeed, in our study, ten seconds yielded more appropriate results for more complex websites than five seconds did. But there is nothing to say that for other websites, the best timing could not be fifteen or even twenty seconds. Even more so, once you also factor in the influence of the cognitive ability of each individual participant.

The key takeaway? When you gauge your audience’s first impressions about something, take a more holistic approach.

Consider your goals for your test. What kind of questions do you want to ask? Use some of the tools that are available to measure the visual complexity of the pictures that you want to present. Give your participants a short working memory test before you start bombarding them with pictures and questions.

Try to adjust the timing in your first impression test to match the situation. To give an analogy, by blindly following a different five-second rule and eating off the floor, you could end up getting sick. Be just as cautious about relying on myths in your usability research methods. This is not to discount five seconds. As we show, it is still good timing for first impression tests in plenty of cases, but it is not the be-all and end-all as far as first impression testing goes. By broadening your perspective, you can do even better.

]]>
hello@smashingmagazine.com (Eduard Kuric)
<![CDATA[How Marketing Changed OOP In JavaScript]]> https://smashingmagazine.com/2023/12/marketing-changed-oop-javascript/ https://smashingmagazine.com/2023/12/marketing-changed-oop-javascript/ Mon, 04 Dec 2023 14:00:00 GMT Even though JavaScript’s name was coined from the Java language, the two languages are worlds apart. JavaScript has more in common with Lisp and Scheme, sharing features such as first-class functions and lexical scoping.

JavaScript also borrows its prototypal inheritance from the Self language. This inheritance mechanism is perhaps what many — if not most — developers do not spend enough time understanding, mainly because it isn’t a requirement for starting to work with JavaScript. That characteristic can be seen as either a design flaw or a stroke of genius. That said, JavaScript’s prototypal nature was marketed and hidden behind a “Java for the web” mask. We’ll elaborate more on that as we go on.

JavaScript isn’t confident in its own prototypal nature, so it gives developers the tools to approach the language without ever having to touch a prototype. This was an attempt to be easily understood by every developer, especially those coming from class-based languages, such as Java, and would later become one of JavaScript’s biggest enemies for years to come: You don’t have to understand how JavaScript works to code in JavaScript.

What Is Classical Object-Oriented Programming?

Classical object-oriented programming (OOP) revolves around the concept of classes and instances and is widely used in languages like Java, C++, C#, and many others. A class is a blueprint or template for creating objects. It defines the structure and behavior of objects that belong to that class and encapsulates properties and methods. On the other hand, objects are instances of classes. When you create an object from a class, you’re essentially creating a specific instance that inherits the structure and behavior defined in the class while also giving each object an individual state.

OOP has many fundamental concepts, but we will focus on inheritance, a mechanism that allows one class to take on the properties and methods of another class. This facilitates code reuse and the creation of a hierarchy of classes.

What’s Prototypal OOP In JavaScript?

I will explain the concepts behind prototypal OOP in JavaScript, but for an in-depth explanation of how prototypes work, MDN has an excellent overview of the topic.

Prototypal OOP differs from classical OOP, which is based on classes and instances. In prototypal OOP, there are no classes, only objects, and they are created directly from other objects.

If we create an object, it will have a built-in internal property, its prototype, that holds a reference to its “parent” object so that we can access the parent’s methods and properties. This is what allows us to access methods like .sort() or .forEach() from any array since each array inherits methods from the Array.prototype object.

The prototype itself is an object, so the prototype will have its own prototype. This creates a chain of objects known as the prototype chain. When you access a property or method on an object, JavaScript will first look for it on the object itself. If it’s not found, it will traverse up the prototype chain until it finds the property or reaches the top-level object. It will often end in Object.prototype, which has a null prototype, denoting the end of the chain.
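
To make the chain concrete, here is a minimal sketch, relying only on the standard Object.getPrototypeOf method, that walks the prototype chain of a plain array:

const myArray = [1, 2, 3];

// The array’s prototype is Array.prototype...
console.log(Object.getPrototypeOf(myArray) === Array.prototype); // true

// ...whose prototype is Object.prototype...
console.log(Object.getPrototypeOf(Array.prototype) === Object.prototype); // true

// ...whose prototype is null: the end of the chain.
console.log(Object.getPrototypeOf(Object.prototype)); // null

// .toString() is not an own property of the array; it is found up the chain.
console.log(myArray.hasOwnProperty("toString")); // false

console.log(myArray.toString()); // 1,2,3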

A crucial difference between classical and prototypal OOP is that we can’t dynamically manipulate a class definition once an object is created. But with JavaScript prototypes, we can add, delete, or change methods and properties from the prototype, affecting the objects down the chain.

“Objects inherit from objects. What could be more object-oriented than that?”

Douglas Crockford

What’s The Difference In JavaScript? Spoiler: None

So, on paper, the difference is simple. In classical OOP, we instantiate objects from a class, and a class can inherit methods and properties from another class. In prototypal OOP, objects can inherit properties and methods from other objects through their prototype.

However, in JavaScript, there is not a single difference beyond syntax. Can you spot the difference between the following two code excerpts?

// With Classes

class Dog {
  constructor(name, color) {
    this.name = name;

    this.color = color;
  }

  bark() {
    return `I am a ${this.color} dog and my name is ${this.name}.`;
  }
}

const myDog = new Dog("Charlie", "brown");

console.log(myDog.name); // Charlie

console.log(myDog.bark()); // I am a brown dog and my name is Charlie.
// With Prototypes

function Dog(name, color) {
  this.name = name;

  this.color = color;
}

Dog.prototype.bark = function () {
  return `I am a ${this.color} dog and my name is ${this.name}.`;
};

const myDog = new Dog("Charlie", "brown");

console.log(myDog.name); // Charlie

console.log(myDog.bark()); // I am a brown dog and my name is Charlie.

There is no difference, and JavaScript will execute the same code, but the latter example is honest about what JavaScript is doing under the hood, while the former hides it behind syntactic sugar.

Do I have a problem with the classical approach? Yes and no. An argument can be made that the classical syntax improves readability by having all the code related to the class inside a block scope. On the other hand, it’s misleading and has led thousands of developers to believe that JavaScript has true classes when a class in JavaScript is no different from any other function object.

My biggest issue isn’t pretending that true classes exist but rather that prototypes don’t.

Consider the following code:

class Dog {
  constructor(name, color) {
    this.name = name;

    this.color = color;
  }

  bark() {
    return `I am a ${this.color} dog and my name is ${this.name}.`;
  }
}

const myDog = new Dog("Charlie", "brown");

Dog.prototype.bark = function () {
  return "I am really just another object with a prototype!";
};

console.log(myDog.bark()); // I am really just another object with a prototype!

Wait, did we just access the class prototype? Yes, because classes don’t exist! They are merely constructor functions (functions that, with new, produce objects), and, inevitably, they have a prototype, which means we can access their .prototype property.
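
You can verify this yourself with a quick sketch; it assumes nothing beyond the standard typeof operator and Object.getPrototypeOf:

class Cat {}

console.log(typeof Cat); // "function"

// An instance is linked to the class’s .prototype object,
// exactly like an object created from a constructor function with new.
const whiskers = new Cat();

console.log(Object.getPrototypeOf(whiskers) === Cat.prototype); // true

console.log(Cat.prototype.constructor === Cat); // true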

It almost looks like JavaScript tries to hide its prototypes. But why?

There Are Clues In JavaScript’s History

In May 1995, Netscape involved JavaScript creator Brendan Eich in a project to implement a scripting language into the Netscape browser. The main idea was to implement the Scheme language into the browser due to its minimal approach. The plan changed when Netscape closed a deal with Sun Microsystems, creators of Java, to implement Java on the web. Soon enough, Brendan Eich and Sun Microsystems founder Bill Joy saw the need for a new language. A language that was approachable for people whose main focus wasn’t only programming. A language both for a designer trying to make a website and for an experienced developer coming from Java.

With this goal in mind, JavaScript was created in 10 days of intense work under the early name of Mocha. It would be changed to LiveScript to market it as a script executing “live” in the browser, but in December 1995, it would ultimately be named JavaScript to be marketed along with Java. This deal with Sun Microsystems forced Brendan to accommodate his prototype-based language to Java. According to Brendan Eich, JavaScript was treated as the “sidekick language to Java” and was greatly underfunded in comparison with the Java team:

“I was thinking the whole time, what should the language be like? Should it be easy to use? Might the syntax even be more like natural language? [...] Well, I’d like to do that, but my management said, ‘Make it look like Java.’”

Eich’s idea for JavaScript was to implement Scheme first-class functions — a feature that would allow callbacks for user events — and OOP based on prototypes from Self. He’s expressed this before on his blog:

“I’m not proud, but I’m happy that I chose Scheme-ish first-class functions and Self-ish prototypes as the main ingredients.”

JavaScript’s prototypal nature stayed but would specifically be obscured behind a Java facade. Prototypes likely remained in place because Eich implemented Self prototypes from the beginning and they later couldn’t be changed, only hidden. We can find a mixed explanation in an old comment on his blog:

“It is ironic that JS could not have class in 1995 because it would have rivaled Java. It was constrained by both time and a sidekick role.”

Either way, JavaScript became a prototype-based language and the most popular one by far.

If Only JavaScript Embraced Its Prototypes

In the rush between the creation of JavaScript and its mass adoption, there were several other questionable design decisions surrounding prototypes. In his book, JavaScript: The Good Parts, Crockford explains the bad parts surrounding JavaScript, such as global variables and the misunderstanding around prototypes.

As you may have noticed, this article is inspired by Crockford’s book. Although I disagree with many of his opinions about JavaScript’s bad parts, it’s important to note the book was published in 2008, when ECMAScript 3 (ES3) was the stable version of JavaScript. Many years have passed since its publication, and JavaScript has significantly changed in that time. The following are features that I think could have been saved from the language if only JavaScript had embraced its prototypes.

The this Value In Different Contexts

The this keyword is another one of the things JavaScript added to look like Java. In Java, and classical OOP in general, this refers to the current instance on which the method or constructor is being invoked, just that. However, in JavaScript, we didn’t have class syntax until ES6 but still inherited the this keyword. My problem with this is that it can be four different things depending on where it is invoked!

1. this In The Function Invocation Pattern

When this is invoked inside a function call, it will be bound to the global object. It will also be bound to the global object if it’s invoked from the global scope.

console.log(this); // window

function myFunction() {
  console.log(this);
}

myFunction(); // window

In strict mode and through the function invocation pattern, this will be undefined.

function getThis() {
  "use strict";

  return this;
}

getThis(); // undefined

2. this In The Method Invocation Pattern

If we reference a function as an object’s property, this will be bound to its parent object.

const dog = {
  name: "Sparky",

  bark: function () {
    console.log(`Woof, my name is ${this.name}.`);
  },
};

dog.bark(); // Woof, my name is Sparky.

Arrow functions do not have their own this, but instead, they inherit this from their parent scope at creation.

const dog = {
  name: "Sparky",

  bark: () => {
    console.log(`Woof, my name is ${this.name}.`);
  },
};

dog.bark(); // Woof, my name is undefined.

In this case, this was bound to the global object instead of dog, hence this.name is undefined.

3. The Constructor Invocation Pattern

If we invoke a function with the new prefix, a new empty object will be created, and this will be bound to that object.

function Dog(name) {
  this.name = name;

  this.bark = function () {
    console.log(`Woof, my name is ${this.name}.`);
  };
}

const myDog = new Dog("Coco");

myDog.bark(); // Woof, my name is Coco.

We could also employ this from the function’s prototype to access the object’s properties, which could give us a more valid reason to use it.

function Dog(name) {
  this.name = name;
}

Dog.prototype.bark = function () {
  console.log(`Woof, my name is ${this.name}.`);
};

const myDog = new Dog("Coco");

myDog.bark(); // Woof, my name is Coco.

4. The apply Invocation Pattern

Lastly, each function inherits an apply method from the function prototype that takes two parameters. The first parameter is the value that will be bound to this inside the function, and the second is an array that will be used as the function parameters.

// Bounding `this` to another object

function bark() {
  console.log(`Woof, my name is ${this.name}.`);
}

const myDog = {
  name: "Milo",
};

bark.apply(myDog); // Woof, my name is Milo.

// Using the array parameter

const numbers = [3, 10, 4, 6, 9];

const max = Math.max.apply(null, numbers);

console.log(max); // 10

As you can see, this can be almost anything and shouldn’t be in JavaScript in the first place. Approaches like using bind() are solutions to a problem that shouldn’t even exist. Fortunately, this is completely avoidable in modern JavaScript, and you can save yourself several headaches if you learn how to dodge it; an advantage that ES6 class users can’t enjoy.
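
To illustrate the kind of workaround we are talking about, here is a small sketch of bind() patching up a lost this (the object and method are made up for the example):

const dog = {
  name: "Sparky",

  bark: function () {
    console.log(`Woof, my name is ${this.name}.`);
  },
};

// Detaching the method from its object loses the binding:
// `this` now points at the global object (or undefined in strict mode).
const detachedBark = dog.bark;

// bind() returns a new function with `this` permanently fixed to dog.
const boundBark = dog.bark.bind(dog);

boundBark(); // Woof, my name is Sparky.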

Crockford has a nice anecdote on the topic from his book:

“This is a demonstrative pronoun. Just having this in the language makes the language harder to talk about. It is like pair programming with Abbott and Costello.”

“But if we want to create a function constructor, we will need to use this.” Not necessarily! In the following example, we can make a function constructor that doesn’t use this or new to work.

function counterConstructor() {
  let counter = 0;

  function getCounter() {
    return counter;
  }

  function up() {
    counter += 1;

    return counter;
  }

  function down() {
    counter -= 1;

    return counter;
  }

  return {
    getCounter,

    up,

    down,
  };
}

const myCounter = counterConstructor();

myCounter.up(); // 1

myCounter.down(); // 0

We just created a function constructor without using this or new! And it comes with a straightforward syntax. A downside you could see is that objects created from counterConstructor won’t have access to its prototype, so we can’t add methods or properties from counterConstructor.prototype.

But do we need this? Of course, we will need to reuse our code, but there are better approaches that we will see later.

The new Prefix

In JavaScript: The Good Parts, Crockford argues that we shouldn’t use the new prefix simply because there is no guarantee that we will remember to use it in the intended functions. I think that it’s an easy-to-spot mistake and also avoidable by capitalizing the constructor functions you intend to use with new. And nowadays, linters will warn us when we call a capitalized function without new, or vice versa.

A better argument is simply that using new forces us to use this inside our constructor functions or “classes,” and as we saw earlier, we are better off avoiding this in the first place.

The Multiple Ways To Access Prototypes

For the historical reasons we already reviewed, we can understand why JavaScript doesn’t embrace its prototypes. By extension, we don’t have tools to mingle with prototypes as straightforward as we would want, but rather devious attempts to manipulate the prototype chain. Things get worse when across documentation, we can read different jargon around prototypes.

The Difference Between [[Prototype]], __proto__, And .prototype

To make the reading experience more pleasant, let’s go over the differences between these terms.

  • [[Prototype]] is an internal property that holds a reference to the object’s prototype. It’s enclosed in double square brackets, which means it typically cannot be accessed using normal notation.
  • __proto__ can refer to two possible properties:
    • It can refer to an accessor property defined on Object.prototype that exposes the hidden [[Prototype]] property. It’s deprecated and performs poorly.
    • It can refer to an optional property we can add when creating an object literal. The object’s prototype will point to the value we give it.
  • .prototype is a property exclusive to functions or classes (excluding arrow functions). When invoked using the new prefix, the instantiated object’s prototype will point to the function’s .prototype.
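
A short sketch can tie the three terms together; it assumes nothing beyond a plain constructor function:

function Dog() {}

const pug = new Dog();

// The new prefix pointed pug’s internal [[Prototype]] at Dog.prototype.
console.log(Object.getPrototypeOf(pug) === Dog.prototype); // true

// The deprecated __proto__ accessor exposes the very same link.
console.log(pug.__proto__ === Dog.prototype); // true

// And the __proto__ literal property sets the link at creation time.
const corgi = { __proto__: Dog.prototype };

console.log(Object.getPrototypeOf(corgi) === Dog.prototype); // true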

We can now see all the ways we can modify prototypes in JavaScript. After reviewing, we will notice they all fall short in at least some aspect.

Using The __proto__ Literal Property At Initialization

When creating a JavaScript object using object literals, we can add a __proto__ property. The created object will point its [[Prototype]] to the value given in __proto__. In a prior example, objects created from our function constructor didn’t have access to the constructor prototype. We can use the __proto__ property at initialization to change this without using this or new.

function counterConstructor() {
  let counter = 0;

  function getCounter() {
    return counter;
  }

  function up() {
    counter += 1;

    return counter;
  }

  function down() {
    counter -= 1;

    return counter;
  }

  return {
    getCounter,

    up,

    down,

    __proto__: counterConstructor.prototype,
  };
}

The advantage of linking the new object’s prototype to the function constructor would be that we can extend its methods from the constructor prototype. But what good would it be if we needed to use this again?

const myCounter = counterConstructor();

counterConstructor.prototype.printDouble = function () {
  return this.getCounter() * 2;
};

myCounter.up(); // 1

myCounter.up(); // 2

myCounter.printDouble(); // 4

We didn’t even modify the internal counter value but instead printed double its value. So, a setter method would be necessary to manipulate its state from outside the initial function constructor declaration. However, we are over-complicating our code since we could have simply added a double method inside our function.

function counterConstructor() {
  let counter = 0;

  function getCounter() {
    return counter;
  }

  function up() {
    counter += 1;

    return counter;
  }

  function down() {
    counter -= 1;

    return counter;
  }

  function double() {
    counter = counter * 2;

    return counter;
  }

  return {
    getCounter,

    up,

    down,

    double,
  };
}

const myCounter = counterConstructor();

myCounter.up(); // 1

myCounter.up(); // 2

myCounter.double(); // 4

Using __proto__ is overkill in practice.

It’s vital to note that __proto__ must only be used when initializing a new object through an object literal. Using the __proto__ accessor in Object.prototype.__proto__ will change the object’s [[Prototype]] after initialization, disrupting lots of optimizations done under the hood by JavaScript engines. That’s why Object.prototype.__proto__ performs poorly and is deprecated.

Object.create()

Object.create() returns a new object whose [[Prototype]] will be the first argument of the function. It also has a second argument that lets you define additional properties to the new objects. However, it’s more flexible and readable to create an object using an object literal. Hence, its only practical use would be to create an object without a prototype using Object.create(null) since all objects created using object literals are automatically linked to Object.prototype.
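
As a brief sketch of both points:

// A truly prototype-less object: it inherits nothing, not even toString().
const bareObject = Object.create(null);

console.log(Object.getPrototypeOf(bareObject)); // null

console.log("toString" in bareObject); // false

// An object literal, by contrast, is automatically linked to Object.prototype.
const plainObject = {};

console.log(Object.getPrototypeOf(plainObject) === Object.prototype); // true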

Object.setPrototypeOf()

Object.setPrototypeOf() takes two objects as arguments and sets the prototype of the first object to the second object. As we saw earlier, switching an object’s prototype after initialization is ill-performing, so avoid it at all costs.

Encapsulation And Private Classes

My last argument against classes is the lack of privacy and encapsulation. Take, for example, the following class syntax:

class Cat {
  constructor(name) {
    this.name = name;
  }

  meow() {
    console.log(`Meow! My name is ${this.name}.`);
  }
}

const myCat = new Cat("Gala");

myCat.meow(); // Meow! My name is Gala.

myCat.name = "Pumpkin";

myCat.meow(); // Meow! My name is Pumpkin.

We don’t have any privacy! All properties are public. We can try to mitigate this with closures:

class Cat {
  constructor(name) {
    this.getName = function () {
      return name;
    };
  }

  meow() {
    console.log(`Meow! My name is ${this.name}.`);
  }
}

const myCat = new Cat("Gala");

myCat.meow(); // Meow! My name is undefined.

Oops, now this.name is undefined outside the constructor’s scope. We have to change this.name to this.getName() so it can work properly.

class Cat {
  constructor(name) {
    this.getName = function () {
      return name;
    };
  }

  meow() {
    console.log(`Meow! My name is ${this.getName()}.`);
  }
}

const myCat = new Cat("Gala");

myCat.meow(); // Meow! My name is Gala.

This is with only one argument, so you can imagine how unnecessarily repetitive our code would be the more arguments we add. Besides, we can still modify our object methods:

myCat.meow = function () {
  console.log(`Meow! ${this.getName()} is a bad kitten.`);
};

myCat.meow(); // Meow! Gala is a bad kitten.

We can save and implement better privacy if we use our own function constructors and even make our methods immutable using Object.freeze()!

function catConstructor(name) {
  function getName() {
    return name;
  }

  function meow() {
    console.log(`Meow! My name is ${name}.`);
  }

  return Object.freeze({
    getName,

    meow,
  });
}

const myCat = catConstructor("Loaf");

myCat.meow(); // Meow! My name is Loaf.

And trying to modify the object’s methods will fail silently.

myCat.meow = function () {
  console.log(`Meow! ${this.getName()} is a bad Kitten.`);
};

myCat.meow(); // Meow! My name is Loaf.

And yes, I am aware of the recently added private class fields. But do we really need even more new syntax when we could accomplish the same using custom constructor functions and closures?
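
For comparison, here is what that privacy looks like with the class-field syntax (a sketch using the # notation):

class Cat {
  #name; // a private field: inaccessible outside the class body

  constructor(name) {
    this.#name = name;
  }

  meow() {
    console.log(`Meow! My name is ${this.#name}.`);
  }
}

const myCat = new Cat("Gala");

myCat.meow(); // Meow! My name is Gala.

// Accessing myCat.#name out here would be a SyntaxError.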

So, Classes Or Prototypes In JavaScript?

In Crockford’s more recent book, How JavaScript Works (PDF), we can see a better option than using Prototypes or Classes for code reuse: Composition!

Using prototypes feels like using a half-finished feature, while classes can lead to overcomplicated and unnecessary hierarchies (and also to this). Fortunately, JavaScript is a multi-paradigm language, and forcing ourselves to only use classes or prototypes for code reusability is constraining ourselves with imaginary ropes.

As Crockford says in his more recent book:

“[I]nstead of ‘same as except,’ we can get ‘a little bit of this and a little bit of that.’”

— Douglas Crockford, How JavaScript Works

Instead of a function constructor or class inheriting from another, we can have a set of constructors and combine them when needed to create a specialized object.

function speakerConstructor(name, message) {
  function talk() {
    return `Hi, my name is ${name} and I want to tell something: ${message}.`;
  }

  return Object.freeze({
    talk,
  });
}

function loudSpeakerConstructor(name, message) {
  const {talk} = speakerConstructor(name, message);

  function yell() {
    return talk().toUpperCase();
  }

  return Object.freeze({
    talk,

    yell,
  });
}

const mySpeaker = loudSpeakerConstructor("Juan", "You look nice!");

mySpeaker.talk(); // Hi, my name is Juan and I want to tell something: You look nice!

mySpeaker.yell(); // HI, MY NAME IS JUAN AND I WANT TO TELL SOMETHING: YOU LOOK NICE!

Without the need for this and new and classes or prototypes, we achieve a reusable function constructor with full privacy and encapsulation.

Conclusion

Yes, JavaScript was made in 10 days in a rush; yes, it was tainted by marketing; and yes, it has a long set of useless and dangerous parts. Yet it is a beautiful language and fuels a lot of the innovation happening in web development today, so it clearly has done something good!

I don’t think we will see a day when prototypes receive the features they deserve, nor one in which we stop using classical syntactic sugar, but we can decide to avoid them when possible.

Unfortunately, this conscious decision to stick to the good parts isn’t exclusive to JavaScript OOP: in its rush into existence, the language picked up a lot of other dubious features that we are better off not using. Maybe we can tackle them in a future article, but in the meantime, we will have to acknowledge their presence and make the conscious decision to keep learning and understanding the language to know which parts to use and which parts to ignore.

]]>
hello@smashingmagazine.com (Juan Diego Rodríguez)
<![CDATA[Recovering Deleted Files From Your Git Working Tree]]> https://smashingmagazine.com/2023/12/recovering-deleted-files-git-working-tree/ https://smashingmagazine.com/2023/12/recovering-deleted-files-git-working-tree/ Fri, 01 Dec 2023 10:00:00 GMT There are times when mistakes happen, and useful and important files are deleted in error or lost from your file system irrevocably (or so it seems). Version control systems make it difficult to permanently lose files, provided they have been either added to staging or committed to a remote repository, because Git allows you to undo or revert changes and access previous versions of the saved files.

It is also possible to erroneously erase files from both the working directory and the Git repository. I’ve certainly done that! I imagine you have, too, if you’re reading this, and if that’s the case, then you will need a way to recover those files.

I have a few methods and strategies you can use to recover your deleted files. Some are more obvious than others, and some are designed for very specific situations. And while it is indeed possible to irrevocably lose a file, even then, you may have a path to at least recover a copy of it with third-party software if it comes to that.

How Git Works With Files

Before we dive into all of that, let’s explore how your files journey from your local computer to your remote repository.

Your files are initially only located on your computer’s storage, known as your working tree or working directory, and Git has no idea they exist yet. At this point, they are at their most vulnerable state since they are untracked.

Adding files to the staging area — also known as the index — so that Git is aware of them is what the git add <filename> command (or git add -A for all files) is for. What actually happens under the hood when staging files is that Git hashes each file’s content, creates a blob for it, and stores the blobs in the /objects subdirectory located at .git/objects. Run git status to confirm that the files you want to commit have been added to your staging area.
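
As a small, hedged illustration of this flow (the file name is hypothetical), you can watch it happen with Git’s own plumbing commands:

git add notes.txt

git ls-files -s notes.txt

git cat-file -p <blob-hash>

The second command prints the mode, blob SHA-1, and path now recorded in the index, and git cat-file -p prints the staged content back out (substitute the blob hash you see in the previous output).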

Once the files are staged, Git is at least aware of them, and we can include them in commits. When a commit happens, Git creates a tree object that represents the state of the repository and a commit object that points to it. The commit object contains the following information:

  • SHA-1 hash of the tree object that represents the state of the repository;
  • SHA-1 hash of the commit’s parent commit object if it has a parent;
  • Author and committer information;
  • Commit message.

It’s at this point that the files are git push-ed to the remote repo, wherever you happen to be hosting it, whether it’s GitHub, Beanstalk, Bitbucket, or whatever.

How Files Can Get Deleted From A Working Tree

So, the key pieces we’re talking about are your project’s working tree, staging area, and commits. It is possible for files to be deleted at any one of these points, but it’s the working tree where restoring a lost file is the toughest, or even irreversible.

There are some very specific Git commands or actions that tend to be the biggest culprits when a file is deleted from the working tree.

git rm

I’m sure you have seen this one before. It’s a command for removing (rm) files from the working tree. It might be the most commonly used command for deleting files.

git reset

Anytime a reset happens, it’s very possible to lose any files you’ve been working on. But there are two types of Git resets that make this possible:

  1. git reset --hard
    This command is sort of a nuclear path for resetting the working tree and the staging area. If you’ve made any changes to tracked files, those will be lost. That goes for commits after the target commit, too, which are discarded altogether. Any changes to tracked files that are not in the target commit are wiped from the working tree (untracked files, however, are left for git clean to handle).
  2. git reset <filename>
    This is a lot less damaging than a hard reset: it removes the specified file from the staging area, while the copy in your working tree is left alone. So there’s a path back, which we’ll get to.

git clean

This removes untracked files from the working tree. Untracked files are not in the Git staging area and are not really part of the repository. They’re typically temporary files or files that have not yet been added to the repository.

One key distinction with a clean command is that it will not remove files that are included in a project’s .gitignore file, nor will it remove files that have been added to the staging area, nor ones that have already been committed. This can be useful for cleaning up your working tree after you have finished working on a project and you want to remove all of the temporary files that you created.

Like git reset, there are different variations of git clean that remove files in different ways:

  • git clean <filename>
    Used to remove specific files from the working tree.
  • git clean -d
    Removes untracked files from a specific directory.
  • git clean -i
    This one interactively removes files from the working tree. And by that, I mean you will be prompted to confirm removal before it happens, which is a nice safeguard against accidents.
  • git clean -n
    This is a dry run option and will show you the files that would be removed if you were to run the original git clean command. In other words, it doesn’t actually remove anything but lets you know what would be removed if you were to run an actual clean.
  • git clean -f
    This one forces the git clean command to remove all untracked files from the working tree, even if they are ignored by the .gitignore file. It’s pretty heavy-handed.
  • git clean -f -d
    Running this command is a lot like git clean -f but wipes out directories as well.
  • git clean -x
    This removes all untracked files, including build products. It is best used when you want to wipe your working tree clean and test a fresh build.
  • git clean -X
    This only removes files ignored by Git.

Of course, I’m merely summarizing what you can already find in Git’s documentation. That’s where you can get the best information about the specific details and nuances of git clean and its variants.
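
A safe habit, sketched here with the flags described above, is to do a dry run before forcing the actual clean:

git clean -n -d

git clean -f -d

The first command only lists the untracked files and directories that would be removed; the second actually removes them.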

Manually Removing Files

Yes, it’s possible! You can manually delete the files and directories from your working tree using your computer’s file manager. The good news, however, is that this will not remove the files from the staging area. Also, it’s quite possible you can undo that action with a simple CMD + Z/CTRL + Z if no other action has happened.

It is important to note that manually removing files from the working tree is a destructive operation. Once you have removed a file from the working tree that has not been added to a commit, it is almost impossible to undo the operation completely from a Git perspective. As a result, it is crucial to make sure that you really want to remove a file before you go this route.

But mistakes happen! So, let’s look at a variety of commands, strategies, and — if needed — apps that could reasonably recover deleted files from a working directory.

How Files Can Be Recovered After Being Deleted

Git commands like git checkout, git reset, git restore, and git reflog can be helpful for restoring files that you have either previously added to the staging area or committed to your repository.

git checkout

If you have not committed the changes that deleted the files and directories, then you can use the git checkout command to check out a previous commit, branch, or tag. This will overwrite the working tree with the contents of the specific commit, branch, or tag, and any deleted files and directories will be restored.

git checkout HEAD~ <filename>

That restores the file as it was in the commit before the current one. But let’s say you’ve made several commits since the file was deleted. If that’s the case, try checking out a specific commit by providing that commit’s hash:

git checkout <commit-hash> <filename>

Oh, you’re not sure which file it is, or there are more files than you want to type out? You can check out the entire working tree by omitting the filename:

git checkout <commit-hash>

git reset

If you have committed the changes that deleted the files and directories, then you can use the git reset command with the --hard flag to reset the HEAD pointer to a previous commit. This will also overwrite the working tree with the contents of the specific commit, and any deleted files and directories will be restored in the process. Be warned, though: as noted above, a hard reset also discards any uncommitted changes.

git reset --hard <commit-hash>

git restore

If you want to restore deleted files and directories without overwriting the working tree, then you can use the git restore command. This command restores files and directories deleted from the staging area or the working tree. Note that it only works for tracked files, meaning that any files that were never git add-ed are excluded.

git restore --staged <filename>

To restore the file in the working tree itself rather than the staging area, use the --worktree option:

git restore --worktree <filename>

And, of course, leave out the filename if you want to restore all files in the working tree from the previous commit:

git restore --worktree

Another option is to restore all of the files in the current directory:

git restore .

git reflog

There’s also the git reflog command, which shows a history of all recent HEAD movements. I like this as a way to identify the commit that you want to check out or reset to.

git reflog
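
A typical recovery flow with the reflog looks like this (a sketch; HEAD@{2} stands in for whichever entry you identify in the output):

git reflog

git checkout HEAD@{2} -- <filename>

The -- separator tells Git that what follows is a path, restoring just that file from the state recorded in the chosen reflog entry.
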
Last Resorts

When files that are neither present in the staging area nor committed are deleted from the working tree, it is commonly accepted that those files are gone forever — or oti lor (“it is gone”), as we say in Yoruba — without any hope of recovery. So, if for any reason or by error, you delete important files from your project’s working tree without ensuring that they are either in the staging area or have been previously committed, then you may be thinking all hope of getting them back is lost.

But I can assure you, based on my experiences in this situation, that it is usually possible to recover all or most of a project’s lost files. There are two approaches I normally take.

File Recovery Apps

File recovery tools can recover lost or deleted data from your storage devices. They work by running a deep scan of your device in an attempt to find every file and folder that has ever existed on your storage device, including deleted and lost files and folders. Once the files have all been found, you can then use the data recovery tool to restore/recover the files of your choice to a new location.

Note: Some of the deleted and lost files found may be corrupted and damaged or not found at all, but I am certain from my experience using them that the majority will be found without any corruption or damage.

There are a variety of file recovery tools available, and the “right” one is largely a subjective matter. I could spend an entire post exclusively on the various options, but I’ve selected a few that I have used and feel comfortable at least suggesting as options to look into.

Wondershare Recoverit is capable of recovering more than 1,000 file formats. Its free tier option allows you to run a scan to find files on your computer’s storage, but to actually recover the files, you will have to upgrade to one of its paid plans, starting at a $69.99 annual subscription or a one-time $119.99 license. There’s a premium plan with enhanced recovery methods for things like videos, as well as fixing corrupted files, capabilities that go well beyond the basic need of recovering a single lost file.

  • Pros: High success rate, free tech support, allows partition recovery.
  • Cons: Free tier is extremely limited.

EaseUS Data Recovery Wizard is perhaps one of the most popular tools out of what’s available. Its free tier option is quite robust, running a deep scan and recovering up to 2GB of data. The difference between that and its paid subscription (starting at $119.95 per year, $169.95 lifetime) is that the paid tier recovers an unlimited amount of data.

  • Pros: Fast deep scans, file preview before recovery, easy to use, generous free tier.
  • Cons: Paid plans are significantly more expensive than other tools, Windows and macOS versions are vastly different, and the macOS software is even more expensive.

DM Disk Editor (DMDE) makes use of a special algorithm that reconstructs directory structures and recovers files by their file signature when recovering solely by the file system proves impossible. DMDE also offers a free tier option, but it is quite limited as you can only recover files from the directory you have selected, and it only recovers up to 4,000 files at a time. Compare that to its paid versions that allow unlimited and unrestricted data recovery. Paid plans start at $20 per year but scale up to $133 per year for more advanced needs that are likely beyond the scope of what you need.

  • Pros: High recovery success rate, generous free tier, reasonable paid tiers if needed.
  • Cons: I personally find the UI to be more difficult to navigate than other apps.
At a glance:

  • Wondershare Recoverit: supports Windows, Mac, and Linux (Premium); starts at $69.99/year; handles 1,000+ file types and formats.
  • EaseUS: supports Windows and Mac; starts at $99.95/year (Windows) or $119.95/year (Mac); handles 1,000+ file types and formats.
  • DMDE: supports Windows, Mac, Linux, and DOS; starts at $20/year; supports basic file formats but not raw photo files.

As I said, there are many, many more options out there. If you’re reading this and have a favorite app that you use to recover lost files, then please share it in the comments. The more, the merrier!

Last Resort: git fsck

First off, the git fsck command can be dangerous if used incorrectly. It is essential to make sure that you understand how to use the command before using it to recover files from the working tree. If you are unsure how to proceed after reading this section, then it is a good idea to consult the Git documentation for additional details on how it is used and when it is best to use it.

That said, git fsck can indeed recover files lost from the working tree in Git and may be your absolute last resort. It works by scanning the Git repository for “dangling” objects, which are objects that are not referenced by any commit. The Git docs define it like this:

dangling object:

“An unreachable object that is not reachable even from other unreachable objects; a dangling object has no references to it from any reference or object in the repository.”

This can happen if a file is staged but then deleted from the working tree before being committed, or if a branch is deleted while the files on it have not been merged elsewhere.

To recover files lost from the working tree using the git fsck command, follow these steps:

  • Run git fsck --lost-found, which is a special mode of the git fsck command.
    It creates a directory called .git/lost-found and moves all of the lost objects to that directory. The lost objects are organized into two subdirectories: commits and objects. The /commits subdirectory contains lost commits, and the /objects subdirectory contains lost blobs, trees, and tags. This command prints the dangling objects (blobs, commits, trees, and tags) if they exist.

  • Run the git show <dangling_object_hash> command for each dangling object that is printed.
    This will print the content of each object and enable you to see what the hashed object originally contained so that you can identify the dangling blobs that correspond to the files you want to recover.
  • To recover a dangling object, you can manually copy the content printed in the console when you run the git show <dangling_object_hash> command, or run git show <dangling_object_hash> > <filename> to save the content of the hashed object to the file you specified in the command. You can also use the git checkout <dangling_object_hash> command to restore the file to the working tree.

Once you have recovered the files that you want to recover, you can commit the changes to the Git repository as if nothing ever happened. Phew! But again, I only advise this approach if you’ve tried everything else and are absolutely at your last resort.

Conclusion

Now that you know how to recover files lost from your working tree, your mind should be relatively at ease whenever, or if ever, you find yourself in this unfortunate situation. Remember, there’s a good chance of recovering a file that was accidentally deleted from a project.

That said, a better plan is to prevent being in this situation in the first place. Here are some tips that will help you prevent ending up almost irrevocably losing files from your working tree:

  • Commit your files to your Git repository and remote servers as quickly and as often as you create or make changes to them.
    There is no such thing as a “too small” commit.
  • Routinely create backups of your project files.
    This will help you recover your files if you accidentally delete them or your computer crashes.

]]>
hello@smashingmagazine.com (Oluwasanmi Akande)
<![CDATA[Cold Days, Shining Lights (December 2023 Wallpapers Edition)]]> https://smashingmagazine.com/2023/11/desktop-wallpaper-calendars-december-2023/ https://smashingmagazine.com/2023/11/desktop-wallpaper-calendars-december-2023/ Thu, 30 Nov 2023 09:30:00 GMT As the year is coming to a close, many of us feel rushed, meeting deadlines, finishing off projects, or preparing for the upcoming holiday season. So how about some beautiful, wintery desktop wallpapers to provide some fresh inspiration and get you in the mood for December (and the holidays, if you’re celebrating)?

As every month for more than twelve years now, artists and designers from across the globe once again got their ideas bubbling and created wallpaper designs to sweeten up your December. They come in versions with and without a calendar and can be downloaded for free. As a little bonus goodie, we also added a selection of December favorites from our wallpapers archives that are just waiting to be rediscovered, so maybe you’ll spot one of your almost-forgotten favorites in here, too.

A huge thank you to everyone who took on the challenge and shared their designs with us — this post wouldn’t exist without you! Happy December!

  • You can click on every image to see a larger preview,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.

Sweet Ride Into The Holidays

“December is here, and that means it’s time to celebrate National Cookie Day and embrace the festive spirit with our snowboarding cookie man! This delightful illustration captures the essence of the holidays — a time for indulging in our favorite treats, spreading joy, and creating unforgettable memories with loved ones. So, grab your favorite cookie, put on your coziest pajamas, and let the holiday cheer commence!” — Designed by PopArt Studio from Serbia.

Spread The Soul Of Christmas

Designed by Bhabna Basak from India.

Views Of The Alhambra

“This last month of the year, we wanted to put the focus on one of the most visited buildings in the world whose beauty is unmatched: the Alhambra in Granada. Together, from the Albaicín we will see this beauty and tour its gardens and rooms. We wish you a very merry Christmas and all the best for the coming year!” — Designed by Veronica Valenzuela Jimenez from Spain.

Energy Drink

Designed by Ricardo Gimenes from Sweden.

Go Green

“We’d love to invite you to our free Smashing Meets Goes Green on Thursday, December 7, to explore how we as designers and developers can make our world just a bit greener.” — Designed by Ricardo Gimenes from Sweden.

Dear Moon, Merry Christmas

Designed by Vlad Gerasimov from Georgia.

The House On The River Drina

“Since we often yearn for a peaceful and quiet place to work, we have found inspiration in the famous house on the River Drina in Bajina Bašta, Serbia. Wouldn’t it be great being in nature, away from the civilization, swaying in the wind and listening to the waves of the river smashing your house, having no neighbors to bother you? Not sure about the Internet, though…” — Designed by PopArt Studio from Serbia.

Cardinals In Snowfall

“During Christmas season, in the cold, colorless days of winter, Cardinal birds are seen as symbols of faith and warmth. In the part of America I live in, there is snowfall every December. While the snow is falling, I can see gorgeous Cardinals flying in and out of my patio. The intriguing color palette of the bright red of the Cardinals, the white of the flurries and the brown/black of dry twigs and fallen leaves on the snow-laden ground fascinates me a lot, and inspired me to create this quaint and sweet, hand-illustrated surface pattern design as I wait for the snowfall in my town!” — Designed by Gyaneshwari Dave from the United States.

Winter Coziness At Home

“Winter coziness that we all feel when we come home after spending some time outside or when we come to our parental home to celebrate Christmas inspired our designers. Home is the place where we can feel safe and sound, so we couldn’t help ourselves but create this calendar.” — Designed by MasterBundles from Ukraine.

Bat Christmas

Designed by Ricardo Gimenes from Sweden.

Enchanted Blizzard

“A seemingly forgotten world under the shade of winter glaze hides a moment where architecture meets fashion and change encounters steadiness.” — Designed by Ana Masnikosa from Belgrade, Serbia.

King Of Pop

Designed by Ricardo Gimenes from Sweden.

Winter Garphee

“Garphee’s fluffiness glowing in the snow.” Designed by Razvan Garofeanu from Romania.

Joy To The World

“Joy to the world, all the boys and girls now, joy to the fishes in the deep blue sea, joy to you and me.” — Designed by Morgan Newnham from Boulder, Colorado.

Ninja Santa

Designed by Elise Vanoorbeek from Belgium.

Hot Hot Hot!

Designed by Ricardo Gimenes from Sweden.

Christmas Cookies

“Christmas is coming and a great way to share our love is by baking cookies.” — Designed by Maria Keller from Mexico.

Sweet Snowy Tenderness

“You know that warm feeling when you get to spend cold winter days in a snug, homey, relaxed atmosphere? Oh, yes, we love it, too! It is the sentiment we set our hearts on for the holiday season, and this sweet snowy tenderness is for all of us who adore watching the snowfall from our windows. Isn’t it romantic?” — Designed by PopArt Studio from Serbia.

All That Belongs To The Past

“Sometimes new beginnings make us revisit our favorite places or people from the past. We don’t visit them often because they remind us of the past, but we enjoy the brief reunion. Cheers to new beginnings in the new year!” Designed by Dorvan Davoudi from Canada.

Trailer Santa

“A mid-century modern Christmas scene outside the norm of snowflakes and winter landscapes.” Designed by Houndstooth from the United States.

Getting Hygge

“There’s no more special time for a fire than in the winter. Cozy blankets, warm beverages, and good company can make all the difference when the sun goes down. We’re all looking forward to generating some hygge this winter, so snuggle up and make some memories.” — Designed by The Hannon Group from Washington D.C.

December Through Different Eyes

“As a Belgian, December reminds me of snow, cosiness, winter, lights, and so on. However, in the Southern Hemisphere, it is summer at this time. With my illustration I wanted to show the different perspectives on December. I wish you all a Merry Christmas and Happy New Year!” — Designed by Jo Smets from Belgium.

Ice Flowers

“I took some photos during a very frosty and cold week before Christmas.” Designed by Anca Varsandan from Romania.

Bathtub Party Day

“December 5th is also known as Bathtub Party Day, which is why I wanted to visualize what celebrating this day could look like.” — Designed by Jonas Vanhamme from Belgium.

Silver Winter

Designed by Violeta Dabija from Moldova.

Christmas Fail

Designed by Elise Vanoorbeek from Belgium.

Catch Your Perfect Snowflake

“This time of year, people tend to dream big and expect miracles. Let your dreams come true!” Designed by Igor Izhik from Canada.

Dream What You Want To Do

“The year will end; in this last month, we hope you can do what you want to do, seize the time, and cherish yourself. May next year be even better!” — Designed by Hong Zi-Qing from Taiwan.

Time For Reindeer, Snowflakes And Jingle Bells

“Christmas is a time you get homesick, even when you’re home! Christmas reminds me of Harry Potter and his holidays when he would be longing to visit the Weasleys and have a Christmas feast with them at their table! The snowflakes, the Christmas tree, bundles of presents, and the lip smacking feast all gives you a reason to celebrate and stay happy amidst all odds! Life is all about celebration! Christmas is a reason to share the joy of happiness, peace and love with all, your near and dear ones.” — Designed by Acodez IT Solutions from India.

Winter Wonderland

“‘Winter is the time for comfort, for good food and warmth, for the touch of a friendly hand and for a talk beside the fire: it is the time for home.’ (Edith Sitwell)” — Designed by Dipanjan Karmakar from India.

Cold Outside

“In December it is cold outside, so here is a cute giraffe with a scarf.” — Designed by Kim Lemin from Belgium.

Christmas Lights Under The Sea

“Jellyfish always reminded me of Christmas because of the shiny magic they create. Lights of hope in the deep blue sea.” — Designed by Marko Stupić from Zagreb, Croatia.

Christmas Time

Designed by Sofie Keirsmaekers from Belgium.

Happy Holidays

Designed by Ricardo Gimenes from Sweden.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[Crafting A Killer Brand Identity For A Digital Product]]> https://smashingmagazine.com/2023/11/crafting-killer-brand-identity-digital-product/ https://smashingmagazine.com/2023/11/crafting-killer-brand-identity-digital-product/ Tue, 28 Nov 2023 10:00:00 GMT It may seem obvious to state that the brand should be properly reflected and showcased in all applications, whether it is an app, a website, or social media, but it’s surprising how many well-established businesses fall short in this regard. Maintaining brand consistency can be a challenge for many teams due to issues like miscommunication, disconnect between graphic and web design teams, and management missteps.

Establishing a well-defined digital brand identity that includes elements like logos, typography, color palettes, imagery, and more ensures that your brand maintains a consistent presence. This consistency not only builds trust and loyalty among your customers but also allows them to instantly recognize any of your interfaces or digital communication.

It’s also about creating an identity for your digital experience that’s a natural and cohesive extension of the overall visual style. Think of Headspace, Figma, or Nike, for example. Their distinctive look and feel are instantly recognizable wherever you encounter them.

Maintaining visual consistency also yields measurable revenue results. As per a Lucidpress survey, 68% of company stakeholders credit 10% to more than 20% of revenue growth to the consistency of their brand. This underscores the significance of initiating and integrating design systems to promote brand consistency.

Brand Strategy

In an ideal world, every new product would kick off with a well-thought-out brand strategy. This means defining the vision, mission, purpose, positioning, and value proposition in the market before diving into design work. While brand strategy predominantly addresses intangible aspects, it’s one of the fundamental cornerstones of a brand, alongside visuals like the logo and website. Even with stunning design, a brand can stumble in the market if its positioning isn’t unique or if the company is unsure about what it truly represents.

However, let’s face it, we often don’t have this luxury. Tight timelines, limited budgets, and stakeholders who might not fully grasp the value of a brand strategy can pose challenges. We all live in the real world, after all. In such cases, the best approach is either to encourage the stakeholders to articulate their brand strategy elements or to work collaboratively to uncover them through a series of workshops.

Defining a brand’s core strategy is the crucial starting point, establishing the foundation for future design work. The brand strategy serves as a robust framework that shapes every aspect of a brand’s presence, whether it be in marketing, on the web, or within applications.

Brand Identity Research

The research phase is where you unearth the insights to distinguish yourself in the vast arena of competitors. By meticulously analyzing consumer trends, design tendencies, and industry landscapes, you gain a deeper understanding of the unique elements that can set your brand apart. This process not only provides a solid foundation for strategic decision-making but also unveils valuable opportunities for innovation and differentiation.

Typically, the research initiates with an analysis of the brand’s existing visual style, especially if it’s already established. This initial exploration serves as a valuable starting point for team discussions, facilitating a comprehensive understanding of what aspects are effective and what needs refinement.

Moving on, the next crucial step involves conducting a comprehensive industry analysis. This process entails an examination of key brand elements, such as logos, colors, and other design components utilized by competitors. This step serves as the strategic guide for making precise design decisions.

When performing industry analysis as part of brand research, aim for specificity rather than generic observations. For instance, when crafting a brand identity for a product brand, a focused investigation into app icon designs becomes imperative. The differentiation of colors among various apps emerges as a potent tool in this endeavor. According to a study conducted by the Pantone Color Institute, color plays a pivotal role, boosting brand recognition by a substantial 80%.

Moreover, it’s essential to consider app icon designs that, while not directly competitors, are ubiquitous on most phones (examples include Google apps, Chrome/Safari, Facebook, Twitter, and so on). It’s crucial that your app icon stands out distinctly, even in comparison to these widely used icons.

To bring in more innovation and fire up creativity in future designs, it’s a good call to widen your scope beyond just your competition. For instance, if sustainability is a core value of the brand, conducting a thorough examination of how brands outside the industry express and visually communicate this commitment becomes pivotal. Similarly, exploring each value outlined in the brand strategy can provide valuable insights for future design considerations.

A highly effective method for presenting research findings is to consolidate all outcomes in one document and engage in comprehensive discussions, whether in-house or with the client. In my practice, research typically serves as a reference document, offering a reliable source for revisiting and reassessing the uniqueness of our design choices and verifying alignment with identified needs. It’s also a perfect argument point for design choices made in the following phase. Essentially, the research phase functions as the guide steering the brand toward a distinctive and unique look and feel.

Brand Identity Concepts

Now, to the fun part — crafting the actual visuals for the brand. A brand concept is a unifying idea or theme. It’s an abstract articulation of the brand’s essence, an overarching idea that engages and influences the audience. Brand concepts sometimes come up organically through the brand strategy; others require elaborate effort and a deep search for inspiration.

There are various methods to generate unique brand ideas that seamlessly connect brand strategy, meanings, and visuals.

The ultimate goal of any brand is to establish a strong emotional connection with users, which makes having a powerful brand idea that permeates the identity, website, and app crucial.

  • Mind mapping and association
    This is a widely used approach, though it looks quite different between various creatives. Start with a diagram listing the brand’s key attributes and features. Then, build on these words with your associations. A typical mind map for me is like a tangled web of brand values, mission, visual drafts, messaging, references, and sketches, all mixed in with associations. Try to blend ideas from totally different parts of the mind map. For example, in the design of an identity for a database company, contrary segments of the mind map may include infinity symbols and bytes. The combination of these elements results in a unique design scalable to both the brand symbol and identity.

  • Reversal
    Sometimes, a brand calls for an unexpected symbol or even a mascot that doesn’t have direct ties to the industry, for example, using a monkey as the symbol for a mailing platform or a bird for a wealth management app. Delving into the process of drawing parallels between unrelated objects, engaging all our senses, and embracing a creative and randomized approach helps to generate fresh and innovative concepts.
  • Random stimuli
    There are instances when tapping into randomness can significantly boost creativity. This approach may involve anything from AI-generated concepts to team brainstorming sessions that incorporate idea shuffling and combinations, often resulting in surprising and inventive ideas.

  • Real-world references
    Designers can sometimes find themselves too deeply immersed in their design bubble. Exploring historical references, natural patterns, or influences from the tangible world can yield valuable insights for a project. It’s essential not to confine yourself solely to your workspace and Pinterest mood boards. This is particularly relevant in identity design, where these tangible parallels can provide rich sources of inspiration and meaning.

Imagine I’m crafting an identity for an adventure tours app. The last place to seek inspiration from is other tour companies. Why? Because if the references are derivative, the work will be too. Begin at the roots. Adventure tours are all about tapping into nature and connecting with your origins. The exploration would kick off by delving into the elements of nature. What sights, smells, sounds, and sensory details do these adventures offer?

That’s the essence that both clients and non-designers appreciate the most — finding tangible connections in the real world. When people can connect not just aesthetically but also emotionally or intellectually to the visuals, they become much more loyal to the brand.

Condense your design concept and ideas to highlight brand identity across diverse yet fitting contexts. Go beyond conventional applications like banners and business cards.

If you’re conceptualizing brand identity for restaurant management software, explore ways to brand the virtual payment card or create merchandise for restaurant employees. When crafting a style for a new video conferencing app, consider integrating the brand seamlessly into commonly used features, such as the ‘call’ button, and think of a way to brand the interface so that users can easily recognize your app in screenshots. Consider these aspects as you move through this project phase. Plus, taking a closer look at the industry can spark some creative ideas and bring a more down-to-earth feel to the concept.

Once the core brand visual concept gains approval and the general visual direction becomes clear, it’s time to create the assets for the brand application. It’s essential to note that no UI work should commence until the core brand identity elements, such as the logo, colors, typography, and imagery style, are developed.

Brand Identity Design And Key Assets For A Digital Product

Now, let’s delve into how you actually apply the brand identity to your interfaces and what the UI team should anticipate from the brand identity design team. One key thing to keep in mind is that users come to a website or app to get things done, not just to admire the visuals. So, we need to let them accomplish their tasks smoothly and subtly weave our brand’s visual identity into their experience.

This section lists assets, along with specific details tailored for digital applications, to make sure your UI colleagues have all they need for a smooth integration of the brand identity into the digital product.

Logo

When crafting a logo for a digital product, it’s essential to ensure that the symbol remains crisp and scalable at any dimension. Even if your symbol boasts exceptional distinctiveness, you’ll frequently require a simplified, compact version suitable for mobile use and applications like app icons or social media profile pictures. In these compact logo versions, the details take on added prominence, with negative space coming to the forefront.

Additionally, it’s highly advisable to create a compact version not just for the symbol but also for the wordmark. In such instances, you’ll typically find a taller x-height, more open apertures, and wider spacing.

One logo approach is pairing a logotype with a standalone symbol. The alternative is to feature just the logotype, incorporating a distinctive detail that can serve as an app icon or avatar. The crucial point is to maintain a strong association with the main logo. To illustrate this point, consider the example of Bolt and how they ingeniously incorporated the negative space in their logo to create a lightning symbol.

Another factor to take into account is to maintain square-like proportions for your logomark. This ensures that the logomark can be seamlessly integrated into common digital applications such as app icons, favicons, and profile pictures without appearing awkward or unbalanced within these placeholders. Ensure your logomark isn’t overly horizontal or vertical to maximize its impact across all digital platforms.

The logo and symbol are core assets of any digital brand. Typically, the logotype’s letter shapes are derived from the primary font.

Typography

Typography plays a pivotal role in shaping a brand’s identity. The selection of a typeface is particularly crucial in the brand identity design phase. Unfortunately, the needs of the UI/UX team are sometimes overlooked by the brand team, especially when dealing with complex products. Typography assets can be categorized into several key components:

Primary Font

Choosing the right typeface can be a challenging task, and finding a distinctive one can be even trickier. Beyond stylistic elements like serifs, non-serifs, and extended or condensed styles, selecting a primary font for a digital product involves considering various requirements. Questions to ponder include the following:

  • How many languages will your product support? (See the sketch after this list.)
  • Will the brand use special symbols such as arrows, currency symbols, or math symbols?
  • What level of readability will the headings need, and what will be the smallest point size the headings are used at?
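
Language support, in particular, often translates directly into how fonts are loaded on the web. As a rough, hypothetical sketch (the “BrandSans” name and file paths are placeholders, not real assets), script coverage can be split into subsets with unicode-range:

@font-face {
  font-family: "BrandSans"; /* hypothetical brand typeface */
  src: url("/fonts/brandsans-latin.woff2") format("woff2");
  unicode-range: U+0000-00FF; /* Latin subset only */
}

@font-face {
  font-family: "BrandSans";
  src: url("/fonts/brandsans-cyrillic.woff2") format("woff2");
  unicode-range: U+0400-04FF; /* Cyrillic subset */
}

The browser only downloads the subsets a page actually uses, so supporting more languages doesn’t have to bloat every visitor’s payload.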

Body Font

Selecting the body font for a digital product demands meticulous attention. This decision can significantly impact readability and, as a result, user loyalty, especially in data-rich environments like dashboards and apps that contain numerals, text, and spreadsheets. Designers must be attentive and responsible intermediaries between users and data. Factors to consider include the following:

  • Typeface’s x-height,
  • Simplified appearance,
  • Legibility at small sizes,
  • Low or no contrast to prevent readability issues.

Fonts with large apertures and open shapes are preferable to keep similar letters distinct, such as ‘c’ and ‘o’. Increased letter spacing can enhance legibility, and typefaces should include both regular and monospaced digits. Special symbols like currency or arrows should also be considered for brand use.
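
On the web side, some of these requirements map to a single CSS declaration. For instance, if the chosen typeface ships tabular figures as an OpenType feature, digits can be forced to align in data-heavy views (the class name here is just an illustration):

/* Monospaced digits keep columns of numbers aligned */
.dashboard-table {
  font-variant-numeric: tabular-nums;
}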

Fallback Fonts

In the realm of digital branding, there will be countless situations where you may need to substitute your fonts. This can include using a body font for iOS or Android apps to save on expensive licensing costs, creating customizations for various countries and scripts, or adapting fonts for other platforms. The flexibility of having fallback fonts is invaluable in ensuring consistent brand representation across diverse digital touchpoints.
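
In practice, this usually means defining a font stack rather than a single font. Here is a minimal sketch, assuming a hypothetical “BrandSans” as the licensed brand face:

:root {
  /* Brand face first, then widely available fallbacks */
  --font-body: "BrandSans", "Helvetica Neue", Arial, system-ui, sans-serif;
}

body {
  font-family: var(--font-body);
}

Wherever the brand font is unavailable (native apps, emails, unsupported scripts), the next font in the stack takes over without breaking the layout.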

Layout Principles

Typography isn’t just about choosing fonts; it’s also about arranging them to reflect the brand uniquely. Employing the same fonts but arranging them differently can distort the brand perception.

Getting the right layout is all about finding that sweet spot between your brand’s vibe and the ever-changing design scene.

When crafting a layout, one can choose from various types that translate different brand voices. Grid-based layouts, for instance, leverage a system of rows and columns to instill order, balance, and harmony by organizing and aligning elements. Asymmetrical layouts, on the other hand, rely on contrast, tension, and movement to yield dynamic and expressive designs that command attention. Modular layouts utilize blocks or modules, fostering flexibility and adaptability while maintaining variety, hierarchy, and structure. Choosing one of the types or creating a hybrid can effectively convey your brand identity and message.
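
To ground the first of those options, a grid-based layout typically starts from a column system like this sketch (the 12-column count and gap are arbitrary conventions, not requirements):

.layout {
  display: grid;
  grid-template-columns: repeat(12, 1fr);
  gap: 1rem;
}

.layout > .feature {
  grid-column: span 8; /* elements claim columns, keeping everything aligned */
}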

Attention to technical details is crucial, including line spacing, consistent borders, text density, and contrast between text sizes. Text alignment should be clearly defined. Creating a layout that accurately represents your brand requires applying design principles that designers intuitively understand, even if others may sense them without articulating why.

Color

Color is undoubtedly one of the most significant elements for any identity, extending beyond products and digital realms. While a unique primary color palette is vital, it is important to recognize that color is not just an aesthetic aspect but a crucial tool for usability and functionality within the brand and the product. This section highlights key areas often overlooked during the brand design process.

  • Call to Action (CTA) Color
    Brand designers frequently present an extensive palette with impressive color combinations, but this can leave UI designers unsure of the appropriate choice for Call to Action (CTA) elements. It is imperative to establish a primary CTA color at the brand identity design phase. This color must have good contrast with both light and dark backgrounds and must not unintentionally trigger associations, such as red for errors or yellow for alerts.
  • Contrast
    Brand identity tends to have more flexibility compared to the strict industry standards for legibility and contrast in screens and interfaces. Nevertheless, brand designers should always evaluate contrast and adhere to WCAG accessibility standards, too. The degree of contrast should be determined based on factors like target audience demographics and potential usage scenarios, aiming for at least AA compliance. Making your product accessible to all is a noble mission that can enhance your brand’s meaning.

  • Extended Color Palette
    To effectively implement color in user interfaces, UI designers need not only the primary colors but also various tints of these colors to indicate different UI statuses. Semantic colors, like red for caution and green for positive connotations, are valuable tools for emphasizing critical elements or providing quick visual feedback. Tints of CTA colors and other hues are essential for indicating states such as hover, click, or visited elements. It’s preferable to define these nuanced details in the brand identity guideline. This ensures the uniform use of the same colors, whether in marketing materials or the product interface. (A sketch of such tokens follows this list.)

  • Color proportions and usage
    The proportion and use of colors have a substantial impact on how a brand is perceived. Usually, the brand’s primary color should serve as an accent rather than dominate most layouts, especially in the interface. Collaborating with UI colleagues to establish a color usage chart can help strike the right balance. Varying the proportions of colors creates visual interest and allows you to set the mood and energy level in a design or illustration by choosing dominant or accent colors wisely.

  • Color compensation
    Colors may appear differently on dark and light backgrounds and might lose contrast when transitioning between dark and light themes. In the modern context of platforms offering both dark and light UI versions, this factor should be considered not only for interface elements but also for logos and logomarks. Logos designed for light backgrounds typically have slightly higher brightness and saturation, while logos for dark backgrounds are less bright.
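
Here is a minimal sketch of how some of these ideas, a dedicated CTA color plus derived tints for UI states, might be expressed as CSS custom properties. The token names and exact mixes are illustrative assumptions, not a prescription:

:root {
  --cta: #0a66ff; /* hypothetical primary CTA color */
  --cta-hover: color-mix(in srgb, var(--cta), #000 15%); /* darker on hover */
  --cta-visited: color-mix(in srgb, var(--cta), #fff 30%); /* lighter tint */

  /* Semantic colors for quick visual feedback */
  --danger: #c62828;
  --success: #2e7d32;
}

Defining these tints once in the brand guideline keeps marketing materials and the product interface sampling from the same palette.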

Color leads to another component of brand identity — the illustration style — where its extensive application in various shades plays a significant role both emotionally and visually.

Scalable Illustration System

An illustration style includes pictograms, icons, and full-scale illustrations. It’s important to keep the whole system intact and maintain consistency throughout the project, ensuring a strong connection with the brand and its assets. This consistency in the illustration system also enhances the user interface’s intuitiveness.

In the context of an illustration system, “style” refers to a collection of construction techniques combined into a unified system. Pictograms and icon systems are made of consistent and reusable elements, and attention to detail is crucial to achieving uniformity. This includes sticking to the pixel grid, ensuring pixel precision, maintaining composition, using consistent rounding, making optical adjustments for intersecting lines, and keeping the thickness consistent.

Stylistically, illustrations can employ a broader arsenal compared to icons, utilizing a wider range of features such as incorporating depth effects, a broader color palette, gradients, blur, and other effects. While pictograms and icons serve more utilitarian purposes, illustrations have the unique ability to convey a deeper emotional connection to the brand. They play a crucial role in strengthening and confirming the brand message and positioning.

The elements discussed in the article play a vital role in enabling the web team to craft the UI kit and contribute to the brand’s success in the digital space. Supplying these assets is the critical minimum. Ensure that all components of your brand are clearly outlined and explained. It’s advisable to establish a guideline consolidating all the rules in one workspace with UI designers (commonly in Figma), facilitating seamless collaboration. Additionally, the brand designer should oversee the UI kit used in interfaces to ensure alignment with all identity components.

Uniform Mechanism

Your digital brand should effectively communicate your broader brand identity, leaving no room for doubt about your values and positioning. The brand acts as the cornerstone, ensuring consistency in the digital product. A well-designed digital product seamlessly integrates all its components, resulting in a cohesive user experience that enhances user loyalty.

Ensure you maintain effective communication with the UI team throughout the whole project. From my experience, despite things appearing straightforward in the brand guidelines and easy to implement, misunderstandings can still occur between the brand identity team and the UI team. Common challenges, such as letter spacing in brand typography, can arise.

The consistent and seamless integration of brand elements into the UI design ensures the brand’s effectiveness. Whether you have a small or large design team, whether it’s in-house or external, incorporating branding into your digital product development is crucial for achieving better results. Remember, while a brand can exist independently, a product cannot thrive without branding.

]]>
hello@smashingmagazine.com (Sasha Denisova)
<![CDATA[A Few Ways CSS Is Easier To Write In 2023]]> https://smashingmagazine.com/2023/11/few-ways-css-easier-write-2023/ https://smashingmagazine.com/2023/11/few-ways-css-easier-write-2023/ Fri, 24 Nov 2023 08:00:00 GMT A little while back, I poked at a number of “modern” CSS features and openly evaluated whether or not they have really influenced the way I write styles.

Spoiler alert: The answer is not much. Some, but not to the extent that the styles I write today would look foreign when held side-by-side with a stylesheet from two or three years ago.

That was a fun thought process but more academic than practical. As I continue thinking about how I approach CSS today, I’m realizing that the differences are a lot more subtle than I may have expected — or have even noticed.

CSS has gotten easier to write than it is different to write.

And that’s not because of one hot screaming new feature that changes everything — say, Cascade Layers or new color spaces — but how many of the new features work together to make my styles more succinct, resilient, and even slightly defensive.

Let me explain.

Efficient Style Groups

Here’s a quick hit. Rather than chaining :hover and :focus states together with comma separation, using the newer :is() pseudo-class makes it a more readable one-liner:

/* Traditional */
a:hover,
a:focus {
  /* Styles */
}

/* More readable */
a:is(:hover, :focus) {
  /* Styles */
}

I say “more readable” because it’s not exactly more efficient. I simply like how it reads as normal conversation: An anchor that is in hover or in focus is styled like this...

Of course, :is() can most definitely make for a more efficient selector. Rather than make up some crazy example, you can check out MDN’s example to see the efficiency powers of :is() and rejoice.
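
For a taste of that efficiency, here is a simple case of my own (nothing crazy, promise). Without :is(), every combination has to be spelled out; with it, the list collapses:

/* Spelled out */
header a:hover,
main a:hover,
footer a:hover {
  text-decoration: underline;
}

/* Collapsed with :is() */
:is(header, main, footer) a:hover {
  text-decoration: underline;
}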

Centering

This is a classic example, right? The “traditional” approach for aligning an element in the center of its parent container was usually a no-brainer, so to speak. We reached for some variety of margin: auto to push an element from all sides inward until it sits plumb in the middle.

That’s still an extremely effective solution for centering, as the margin shorthand looks at every direction. But say we only need to work in the inline direction, as in left and right, when working in a default horizontal left-to-right writing mode. That’s where the “traditional” approach falls apart a bit.

/* Traditional */
margin-left: auto;
margin-right: auto;

Maybe “falls apart” is heavy-handed. It’s more that it requires dropping the versatile margin shorthand and reaching specifically for two of its constituent properties, adding up to one more line of overhead. But, thanks to the concept of logical properties, we get two more shorthands of the margin variety: one for the block direction and one for the inline direction. So, going back to a situation where centering only needs to happen in the inline direction, we now have this to keep things efficient:

/* Easier! */
margin-inline: auto;

And you know what else? The simple fact that this example makes the subtle transition from physical properties to logical ones means that this little snippet is both as equally efficient as throwing margin: auto out there and resilient to changes in the writing mode. If the page suddenly finds itself in a vertical right-to-left mode, it still holds up by automatically centering the element in the inline direction when the inline direction flows up and down rather than left and right.
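
If you want to see that resilience for yourself, flip the writing mode in a quick sketch like this. The same declaration keeps centering along the inline axis even though that axis now runs vertically:

.post {
  writing-mode: vertical-rl; /* the inline direction now flows top to bottom */
  margin-inline: auto;       /* still centers along the inline axis */
}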

Adjusting For Writing Modes, In General

I’ve already extolled the virtues of logical properties. They actually may influence how I write CSS today more than any other CSS feature since Flexbox and CSS Grid.

I certainly believe logical properties don’t get the credit they deserve, likely because document flow is a lot less exciting than, say, things like custom properties and container queries.

Traditionally, we might write one set of styles for whatever is the “normal” writing direction, then target the writing mode on the HTML level using [dir="rtl"] or whatever. Today, though, forget all that and use logical properties instead. That way, the layout follows the writing mode!

So, where we may normally need to reset a physical margin when changing writing modes like this:

/* Traditional */
body {
  margin-left: 1rem;
}

body[dir="rtl"] {
  margin-left: 0; /* reset left margin */
  margin-right: 1rem; /* apply to the right */
  text-align: right; /* push text to the other side */
}

... there’s no longer a need to reset things as long as we’re working with logical properties:

/* Much easier! */
body {
  margin-inline-start: 1rem;
}

Trimming Superfluous Spacing

Here’s another common pattern. I’m sure you’ve used an unordered list of links inside of a <nav> for the main or global navigation of a project.

<nav>
  <ul>
    <li><a href="/products">Products</a></li>
    <li><a href="/products">Services</a></li>
    <li><a href="/products">Docs</a></li>
    <!-- etc. -->
  <ul>
</nav>

And in those cases, I’m sure you’ve been asked to display those links side-by-side rather than allowing them to stack vertically as an unordered list is naturally wont to do. Some of us who have been writing styles for some years may have muscle memory for changing the display of those list items from default block-level elements into inline elements while preserving the box model properties of block elements:

/* Traditional */
li {
  display: inline-block;
}

You’re going to need space between those list items. After all, they no longer take up the full available width of their parent since inline-block elements are only as wide as the content they contain, plus whatever borders, padding, margin, and offsets we add. Traditionally, that meant reaching for margin as we do for centering, but only the constituent margin property that applies the margin in the inline direction we want, whether that is margin-left/margin-inline-start or margin-right/margin-inline-end.

Let’s assume we’re working with logical properties and want a margin at the end of the list of items in the inline direction:

/* Traditional */
li {
  display: inline-block;
  margin-inline-end: 1rem;
}

But wait! Now we have margin on all of the list items. There’s really no need for a margin on the last list item because, well, there are no other items after it.

That may be cool in the vast majority of situations, but it leaves the layout susceptible. What if, later, we decide to display another element next to the <nav>? Suddenly, we’re dealing with superfluous spacing that might affect how we decide to style that new element. It’s a form of technical debt.

It would be better to clean that up and tackle spacing for reals without that worry. We could reach for a more modern feature like the :not() pseudo-class. That way, we can exclude the last list item from participating in the margin party.

/* A little more modern */
li {
  display: inline-block;
}
li:not(:last-of-type) {
  margin-inline-end: 1rem;
}

Even easier? Even more modern? We could reach for the margin-trim property, which, when applied to the parent element, chops off superfluous spacing like a good haircut, effectively collapsing margins that prevent the child elements from sitting flush with the parent’s edges.

/* Easier, more modern */
ul {
  margin-trim: inline-end;
}

li {
  display: inline-block;
  margin-inline-end: 1rem;
}

Before any pitchforks are raised, let’s note that margin-trim is experimental and only supported by Safari at the time I’m writing this. So, yes, this is bleeding-edge modern stuff and not exactly the sort of thing you want to ship straight to production. Just because something is “modern” doesn’t mean it’s the right tool for the job!

In fact, there’s probably an even better solution without all the caveats, and it’s been sitting right under our noses: Flexbox. Turning the unordered list into a flexible container overrides the default block-level flow of the list items without changing their display, giving us the side-by-side layout we want. Plus, we gain access to the gap property, which you might think of as margin with margin-trim built right in because it only applies space between the children rather than all sides of them.

/* Less modern, but even easier! */
ul {
  display: flex;
  gap: 1rem;
}

This is what I love about CSS. It’s poetic in the sense that there are many ways to say the same thing — some are more elegant than others — but the “best” approach is the one that fits your thinking model. Don’t let anyone tell you you’re wrong if the output is what you’re expecting.

Just because we’re on the topic of styling lists that don’t look like lists, it’s worth noting that the common task of removing list styles on both ordered and unordered lists (list-style-type: none) has a side effect in Safari that strips the list items of their default accessible role. One way to “fix” it (if you consider it a breaking change) is to add the role back in HTML à la <ul role="list">. Manuel Matuzović has another approach that allows us to stay in CSS by removing the list style type with a value of empty quotes:

ul {
  list-style-type: "";
}

I appreciate that Manuel not only shared the idea but has provided the results of light testing as well while noting that more testing might be needed to ensure it doesn’t introduce other latent consequences.

Maintaining Proportions

There’s no need to dwell on this one. We used to have very few options for maintaining an element’s physical proportions. For example, if you want a perfect square, you could rely on fixed pixel units explicitly declared on the element’s width and height:

/* Traditional */
height: 500px;
width: 500px;

Or, perhaps you need the element’s size to flex a bit, so you prefer relative units. In that case, something like percentages is difficult because a value like 50% is relative to the size of the element’s parent container rather than the element itself. The parent element then needs fixed dimensions or something else that’s completely predictable. It’s almost an infinite loop of trying to maintain the 1:1 proportion of one element by setting the proportion of another containing element.

The so-called “Padding Hack” sure was a clever workaround and not really much of a “hack” as much as a display of masterclass-level command of the CSS Box Model. Its origins date back to 2009, but Chris Coyier explained it nicely in 2017:

“If we force the height of the element to zero (height: 0;) and don’t have any borders, then the padding will be the only part of the box model affecting the height, and we’ll have our square.”

— Chris Coyier
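
In code, the hack boiled down to something like the following sketch. It leans on the fact that percentage padding resolves against the containing block’s width, which usually matches the element’s own width for a full-width block:

/* The old “Padding Hack” for a square */
.square {
  height: 0;
  padding-top: 100%; /* resolves against the containing block’s width */
}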

Anyway, it took a lot of ingenious CSS to pull it off. Let’s hear it for the CSS Working Group, which came up with a much more elegant solution: an aspect-ratio property.

/* Easier! */
aspect-ratio: 1;
width: 50%;

Now, we have a perfect square no matter how the element’s width responds to its surroundings, providing us with an easier and more efficient ruleset that’s more resilient to change. I often find myself using aspect-ratio in place of an explicit height or width in my styles these days.
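
As a quick example of that habit, an embedded video wrapper no longer needs a magic height. (The 16 / 9 ratio here is just the common widescreen assumption.)

iframe {
  width: 100%;
  aspect-ratio: 16 / 9; /* height derives from whatever the width resolves to */
}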

Card Hover Effects

Not really CSS-specific, but styling a hover effect on a card has traditionally been a convoluted process where we wrap the element in an <a> and hook into it to style the card accordingly on hover. But with :has() — now supported in all major browsers as of Firefox 121! — we can put the link in the card as a child how it should be and style the card as a parent element when it *has* hover.

.card:has(:hover, :focus) {
  /* Style away! */
}

That’s way super cool, awesome, and easier to read than, say:

a.card-link:hover > .card {
  /* Style what?! */
}

Creating And Maintaining Color Palettes

A long, long time ago, I shared how I name color variables in my Sass files. The point is that I defined variables with hexadecimal values, sort of like this in a more modern context using CSS variables instead of Sass:

/* Traditional */
:root {
  --black: #000;
  --gray-dark: #333;
  --gray-medium: #777;
  --gray-light: #ccc;
  --gray-lighter: #eaeaea;
  --white: #fff;
}

There’s nothing inherently wrong with this. Define colors how you want! But notice that what I was doing up there was manually setting a range of grayscale colors and doing so with inflexible color values. As you might have guessed by this point, there is a more efficient way to set this up so that it is much more maintainable and even easier to read.

/* Easier to maintain! */
:root {
  --primary-color: #000;
  --gray-dark: color-mix(in srgb, var(--primary-color), #fff 25%);
  --gray-medium: color-mix(in srgb, var(--primary-color), #fff 40%);
  --gray-light: color-mix(in srgb, var(--primary-color), #fff 60%);
  --gray-lighter: color-mix(in srgb, var(--primary-color), #fff 75%);
}

Those aren’t exactly 1:1 conversions. I’m too lazy to do it for real, but you get the idea, right? Right?! The “easier” way may *look* more complicated, but if you want to change the main color, update the --primary-color variable and call it a day.

Perhaps a better approach would be to change the name --primary-color to --grayscale-palette-base. This way, we can use the same sort of approach across many other color scales for a robust color system.

/* Easier to maintain! */
:root {
  /* Baseline Palette */
  --black: hsl(0 0% 0%);
  --white: hsl(0 0% 100%);
  --red: hsl(11 100% 55%);
  --orange: hsl(27 100% 49%);
  /* etc. */

  /* Grayscale Palette */
  --grayscale-base: var(--black);
  --grayscale-mix: var(--white);

  --gray-100: color-mix(in srgb, var(--grayscale-base), var(--grayscale-mix) 75%);
  --gray-200: color-mix(in srgb, var(--grayscale-base), var(--grayscale-mix) 60%);
  --gray-300: color-mix(in srgb, var(--grayscale-base), var(--grayscale-mix) 40%);
  --gray-400: color-mix(in srgb, var(--grayscale-base), var(--grayscale-mix) 25%);

  /* Red Palette */
  --red-base: var(--red);
  --red-mix: var(--white);

  --red-100: color-mix(in srgb, var(--red-base), var(--red-mix) 75%);
  /* etc. */

  /* Repeat as needed */
}

Managing color systems is a science unto itself, so please don’t take any of this as a prescription for how it’s done. The point is that we have easier ways to approach them these days, whereas we were forced to reach for non-CSS tooling to even get access to variables.

Managing Line Lengths

Two things that are pretty new to CSS that I’m absolutely loving:

  • Character length units (ch);
  • text-wrap: balance.

As far as the former goes, I love it for establishing the maximum width of containers, particularly those meant to hold long-form content. Conventional wisdom tells us that an ideal length for a line of text is somewhere between 50-75 characters per line, depending on your source. In a world where font sizing can adapt to the container size or the viewport size, predicting how many characters will wind up on a line is a guessing game with a moving target. But if we set the container to a maximum width that never exceeds 75 characters via the ch unit and a minimum width that fills most, if not all, of the containing width in smaller contexts, that’s no longer an issue, and we can ensure a comfortable reading space at any breakpoint — without media queries, to boot.

article {
  width: min(100%, 75ch);
}

Same sort of thing with headings. We don’t always have the information we need — font size, container size, writing mode, and so on — to produce a well-balanced heading. But you know who does? The browser! Using the new text-wrap: balance value lets the browser decide when to wrap text in a way that prevents orphaned words or grossly unbalanced line lengths in a multi-line heading. This is another one of those cases where we’re waiting on complete browser support (Safari, in this instance). Still, it’s also one of those things I’m comfortable dropping into production now as a progressive enhancement since there’s no negative consequence with or without it.

A word of caution, however, for those of you who may be tempted to apply this in a heavy-handed way across the board for all text:

/* 👎 */
* {
  text-wrap: balance;
}

Not only is that an un-performant decision, but the balance value is specced in a way that ignores any text that is longer than ten lines. The exact algorithm, according to the spec, is up to the user agent and could be treated as the auto value if the maximum number of lines is exceeded.

/* 👍 */
article :is(h1, h2, h3, h4, h5, h6) {
  text-wrap: balance;
}

text-wrap: pretty is another one in experimentation at the moment. It sounds like it’s similar to balance but in a way that allows the browser to sacrifice some performance gains for layout considerations. However, I have not played with it, and support for it is even more limited than balance.
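
If you want to experiment with it anyway, it slots in the same way as balance, and unsupported browsers simply ignore the declaration, so trying it is low-risk:

/* Experimental; treat as a progressive enhancement */
p {
  text-wrap: pretty;
}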

How About You?

These are merely the things that CSS offers here in late 2023 that I feel are having the most influence on how I write styles today versus how I may have approached similar situations back in the day when I had to walk uphill both ways to produce a stylesheet.

I can think of other features that I’ve used but haven’t fully adopted in my toolset. Those would include things like the following:

What say you? I know there was a period of time when some of us were openly questioning whether there’s “too much” CSS these days and opining that the learning curve for getting into CSS is becoming a difficult barrier to entry for new front-enders. What new features are you finding yourself using, and are they helping you write CSS in new and different ways that make your code easier to read and maintain, or perhaps making you “re-learn” how you think about styles?

]]>
hello@smashingmagazine.com (Geoff Graham)
<![CDATA[An Introduction To Full Stack Composability]]> https://smashingmagazine.com/2023/11/introduction-full-stack-composability/ https://smashingmagazine.com/2023/11/introduction-full-stack-composability/ Thu, 23 Nov 2023 20:00:00 GMT This article is sponsored by Storyblok

Composability is not only about building a design system. We should create and manage reusable components in the frontend and the UX of our website, as well as coordinate with the backend and the content itself.

In this article, we’ll discuss how to go through both the server and the client sides of our projects and how to align with the content that we’ll manage. We’ll also compare how to implement composable logic into our code using different approaches provided by React-based frameworks like Remix and Next.js.

Composable Architecture

In the dynamic landscape of web development, the concept of composability has emerged as a key player in crafting scalable, maintainable, and efficient systems. It goes beyond merely constructing design systems; it encompasses the creation and management of reusable components across the entire spectrum of web development, from frontend UX to backend coordination and content management.

Composability is the art of building systems in a modular and flexible way. It emphasizes creating components that are not only reusable but can seamlessly fit together, forming a cohesive and adaptable architecture. This approach not only enhances the development process but also promotes consistency and scalability across projects.

We define “composable architecture” as the idea of building software systems from small, independent components that you can combine to form a complete system. Think of a system as a set of LEGO pieces. Putting them together, you can build cool structures, figures, and other creations. But the cool thing about those blocks is that you can exchange them, reuse them for other creations, and add new pieces to your existing models.

Parts Of A Composable Architecture

To manage a composable architecture for our projects, we have to connect two parts:

Modular Components

Break down the system into independent, self-contained modules or components. Modular components can be developed, tested, and updated independently, promoting reusability and easier maintenance.

When we talk about modular components, we are referring to units like:

  • Microservices
    The architectural style for developing software applications as a set of small, independent services that communicate with each other through well-defined APIs. The application is broken down into a collection of loosely coupled and independently deployable services, each responsible for a specific business capability.
  • Headless Applications
    In a headless architecture, the application’s logic and functionality are decoupled from the presentation layer, allowing it to function independently of a specific user interface.
  • Packaged Business Capabilities (PBC)
    A set of activities, products, and services bundled together and offered as a complete solution. It is a very common concept in the e-commerce environment.

APIs

As the components of our architecture can manage different types of data, processes, and tasks of different natures, they need a common language to communicate between them. Components should expose consistent and well-documented APIs (Application Programming Interfaces). An API is a set of rules and protocols that allows one software application to interact with another. APIs define the methods and data formats that applications can use to communicate with each other.

Benefits Of A Composable Architecture

When applying a composable approach to the architecture of our projects, we will see some benefits and advantages:

  • Easy to reuse.
    Components are designed to be modular and independent. This makes it easy to reuse them in different parts of the system or entirely different systems. Reusability can significantly reduce development time and effort, as well as improve consistency across different projects.
  • Easy to scale.
    When the demand for a particular service or functionality increases, you can scale the system by adding more instances of the relevant components without affecting the entire architecture. This scalability is essential for handling growing workloads and adapting to changing business requirements.
  • Easy to maintain.
    Each component is self-contained. If there’s a need to update or fix a specific feature, it can be done without affecting the entire system. This modularity makes it easier to identify, isolate, and address issues, reducing the impact of maintenance activities on the overall system.
  • Independence from vendors.
    This reduces the dependence on specific vendors for components, making it easier to switch or upgrade individual parts without disrupting the entire system.
  • Faster development and iteration.
    Development teams can work on different components concurrently. This parallel development accelerates the overall development process. Additionally, updates and improvements can be rolled out independently.

The MACH Architecture

An example of composability is what is called the MACH architecture. The MACH acronym breaks down into Microservices, API-first, Cloud-native, and Headless. This approach is focused on applying composability in a way that allows you to mold the entire ecosystem of your projects and organization to make it align with business needs.

One of the main ideas of the MACH approach is to let marketers, designers, and front-end developers do their thing without having to worry about the backend in the process. They can tweak the look and feel on the fly, run tests, and adapt to what customers want without slowing down the whole operation.

With MACH architecture, getting to an MVP (minimum viable product) is like a rocket ride. Developers can whip up quick prototypes, and businesses can test out their big ideas before going all-in.

Speaking of composability, the advantage of Storyblok is the component approach it uses to manage content structures. Because of its Headless nature, Storyblok allows you to use the created components (or “blocks”, as they are called on the platform) with any technology or framework. Linking to the previous topic, you can create component structures to manage content in Storyblok while managing their visual representation with React components on the client side (or server side) of your application.

Conclusion

Composability is a powerful paradigm that transforms the way we approach web development. By fostering modularity, reusability, and adaptability, a composable architecture lays the groundwork for building robust and scalable web applications. As we navigate through both server and client sides, aligning with content management, developers can harness the full potential of composability in order to create a cohesive and efficient web development ecosystem.

]]>
hello@smashingmagazine.com (Facundo Giuliani)
<![CDATA[CSS Responsive Multi-Line Ribbon Shapes (Part 2)]]> https://smashingmagazine.com/2023/11/css-responsive-multi-line-ribbon-shapes-part2/ https://smashingmagazine.com/2023/11/css-responsive-multi-line-ribbon-shapes-part2/ Wed, 22 Nov 2023 10:00:00 GMT In my previous article, we tackled ribbons in CSS. The idea was to create a classic ribbon pattern using a single element and values that allow it to adapt to however much content it contains. We established a shape with repeating CSS gradients, tailor-cut the ribbon’s ends with clip-path to complete the pattern, and wound up with two ribbon variations: one that stacks vertically with straight strands of ribbons and another that tweaks the shape by introducing pseudo-elements.

If you are wondering why I am using 80%, there is no particular logic to my approach. I simply found that covering more space with the color and leaving less space between lines produces a better result to my eye. I could have assigned variables to control the space without touching the core code, but there’s already more than enough complexity going on. So, that’s the reasoning behind the hard-coded value.

Styling The First Ribbon

We’ll start with the red ribbon from the demo. This is what we’re attempting to create:

It may look complex, but we will break it down into a combination of basic shapes.

Stacking Gradients

Let’s start with the gradient configuration, and below is the result we are aiming for. I am adding a bit of transparency to better see both gradients.

h1 {
  --c: #d81a14;

  padding-inline: .8lh;
  background:
    /* Gradient 1 */
    linear-gradient(var(--c) 80%, #0000 0) 
      0 .1lh / 100% 1lh,
    /* Gradient 2 */
    linear-gradient(90deg, color-mix(in srgb, var(--c), #000 35%) 1.6lh, #0000 0) 
      -.8lh 50% / 100% calc(100% - .3lh) repeat-x;
}

We already know all about the first gradient because we set it up in the last section. The second gradient, however, is placed behind the first one to simulate the folded part. It uses the same color variable as the first gradient, but it’s blended with black (#000) in the color-mix() function to darken it a smidge and create depth in the folds.

The thing with the second gradient is that we do not want it to reach the top and bottom of the element, which is why its height is equal to calc(100% - .3lh).

Note the use of padding in the inline direction, which is required to avoid text running into the ribbon’s folds.

Masking The Folded Parts

Now, it’s time to introduce a CSS mask. If you look closely at the design of the ribbon, you will notice that we are cutting triangular shapes from the sides.

We have applied a triangular shape on the left and right sides of the ribbon. Unlike the backgrounds, they repeat every two lines, giving us the complex repetition we want.

Imagine for a moment that those parts are transparent.

That will give us the final shape! We can do it with masks, but this time, let’s try using conic-gradient(), which is nice because it allows us to create triangular shapes. And since there’s one shape on each side, we’ll use two conical gradients — one for the left and one for the right — and repeat them in the vertical direction.


mask:
  conic-gradient(from 225deg at .9lh, #0000 25%, #000 0) 
    0 1lh / 50% 2lh repeat-y,
  conic-gradient(from 45deg at calc(100% - .9lh), #0000 25%, #000 0) 
    100% 0 / 50% 2lh repeat-y;

Each gradient covers half the width (50%) and takes up two lines of text (2lh). Also, note the 1lh offset of the first gradient, which is what allows us to alternate between the two as the ribbon adapts in size. It’s pretty much a zig-zag pattern and, guess what, I have an article that covers how to create zig-zag shapes with CSS masks. I highly recommend reading that for more context and practice applying masks with conical gradients.

Masking The Ribbon’s Ends

We are almost done! All we are missing are the ribbon’s cut edges. This is what we have so far:

We can fill that in by adding a third gradient to the mask:

mask:
  /* New gradient */
  linear-gradient(45deg, #000 50%, #0000 0) 100% .1lh / .8lh .8lh no-repeat,

  conic-gradient(from 225deg at .9lh, #0000 25%, #000 0) 
    0 1lh / 50% 2lh repeat-y,
  conic-gradient(from 45deg at calc(100% - .9lh), #0000 25%, #000 0) 
    100% 0 / 50% 2lh repeat-y;

That linear gradient will give us the missing part at the top, but we still need to do the same at the bottom, and here, it’s a bit tricky because, unlike the top part, the bottom is not static. The cutout can be either on the left or the right based on the number of lines of text we’re working with:

We will fill in those missing parts with two more gradients. Below is a demo where I use different colors for the newly added gradients to see exactly what’s happening. Use the resize handle to see how the ribbon adjusts when the number of lines changes.
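To sketch the idea, we can mirror the top gradient into each bottom corner. Check the demo for the exact values; the angles and positions below are a reconstruction for illustration and may not match the demo exactly:

mask:
  /* Top cut (unchanged) */
  linear-gradient(45deg, #000 50%, #0000 0) 100% .1lh / .8lh .8lh no-repeat,
  /* Bottom-left cut: the top gradient mirrored into the bottom-left corner */
  linear-gradient(225deg, #000 50%, #0000 0) 0 calc(100% - .1lh) / .8lh .8lh no-repeat,
  /* Bottom-right cut: the same shape mirrored into the bottom-right corner */
  linear-gradient(135deg, #000 50%, #0000 0) 100% calc(100% - .1lh) / .8lh .8lh no-repeat,

  conic-gradient(from 225deg at .9lh, #0000 25%, #000 0) 
    0 1lh / 50% 2lh repeat-y,
  conic-gradient(from 45deg at calc(100% - .9lh), #0000 25%, #000 0) 
    100% 0 / 50% 2lh repeat-y;

Only one of the two bottom corners hosts the ribbon’s end at any given time (it depends on whether the line count is odd or even), which is why the bottom needs a gradient on each side.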

Styling The Second Ribbon

The second ribbon from the demo — the green one — is a variation of the first ribbon.

I am going a little bit faster this time around. We’re working with many of the same ideas and concepts, but you will see how relatively easy it is to create variations with this approach.

The first thing to do is to add some space on the top and bottom for the cutout part. I’m applying a transparent border for this. The thickness needs to be equal to half the height of one line (.5lh).

h1 {
  --c: #d81a14;

  border-block: .5lh solid #0000;
  padding-inline: 1lh;
  background: linear-gradient(var(--c) 80%, #0000 0) 0 .1lh / 100% 1lh padding-box;
}

Note how the background gradient is set to cover only the padding area using padding-box.
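If the box keywords are new to you, here’s a tiny standalone comparison, unrelated to the ribbon itself: a single box value at the end of the background shorthand sets both background-origin and background-clip.

/* Not part of the ribbon: a minimal before/after of the box keywords. */
.padding-only {
  border: .5lh solid #0000;
  background: linear-gradient(red, blue) padding-box; /* the gradient stops at the padding edge */
}

.whole-box {
  border: .5lh solid #0000;
  background: linear-gradient(red, blue) border-box; /* the gradient also paints under the transparent border */
}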

Now, unlike the first ribbon, we are going to add two more gradients for the vertical pieces that create the folded darker areas.

h1 {
  --c: #d81a14;

  border-block: .5lh solid #0000;
  padding-inline: 1lh;
  background:
    /* Gradient 1 */
    linear-gradient(var(--c) 80%, #0000 0) 0 .1lh / 100% 1lh padding-box,
    /* Gradient 2 */
    linear-gradient(#0000 50%, color-mix(in srgb, var(--c), #000 35%) 0) 
     0 0 / .8lh 2lh repeat-y border-box,
    /* Gradient 3 */
    linear-gradient(color-mix(in srgb, var(--c), #000 35%) 50%, #0000 0) 
     100% 0 / .8lh 2lh repeat-y border-box;
}

Notice how the last two gradients are set to cover the entire area using border-box. The height of each gradient needs to equal two lines of text (2lh), while the width should be consistent with the height of each horizontal gradient. With this, we establish the folded parts of the ribbon and also prepare the code for creating the triangular cuts at the start and end of the ribbon.

Here is an interactive demo where you can resize the container to see how the gradient responds to the number of lines of text.

Applying only the conic gradients would also hide the top and bottom cutout parts, so I have to introduce a third gradient to keep them visible:

mask:
  /* New Gradient */
  linear-gradient(#000 1lh, #0000 0) 0 -.5lh,
  /* Left Side */
  conic-gradient(from 225deg at .9lh, #0000 25%, #000 0) 
   0 1lh / 51% 2lh repeat-y padding-box,
  /* Right Side */
  conic-gradient(from 45deg at calc(100% - .9lh), #0000 25%, #000 0) 
   100% 0 / 51% 2lh repeat-y padding-box;

And the final touch is to use clip-path for the cutouts at the ends of the ribbon.

Notice how the clip-path is cutting two triangular portions from the bottom to make sure the cutout is always visible whether we have an odd or even number of lines.

This is how the final code looks when we put everything together:

h1 {
  --c: #d81a14;

  padding-inline: 1lh;
  border-block: .5lh solid #0000;
  background: 
    linear-gradient(var(--c) 80%, #0000 0)
      0 .1lh / 100% 1lh padding-box,
    linear-gradient(#0000 50%, color-mix(in srgb, var(--c), #000 35%) 0)
      0 0 / .8lh 2lh repeat-y border-box,
    linear-gradient(color-mix(in srgb, var(--c), #000 35%) 50%, #0000 0)
      100% 0 / .8lh 2lh repeat-y border-box;
  mask:
    linear-gradient(#000 1lh, #0000 0) 0 -.5lh,
    conic-gradient(from 225deg at .9lh, #0000 25%, #000 0)
     0 1lh / 51% 2lh repeat-y padding-box,
    conic-gradient(from 45deg at calc(100% - .9lh), #0000 25%, #000 0)
     100% 0 / 51% 2lh repeat-y padding-box;
  clip-path: polygon(0 0, calc(100% - .8lh) 0,
    calc(100% - .4lh) .3lh,
    100% 0, 100% 100%,
    calc(100% - .4lh) calc(100% - .3lh),
    calc(100% - .8lh) 100%, .8lh 100%, .4lh calc(100% - .3lh), 0 100%);
}

Earlier in this series, I challenged you to find a way to reverse the direction of the first ribbon by adjusting the gradient values. Try to do the same thing here!

It may sound difficult, but if you need a lifeline, you can get the code from my online collection. Still, it’s the perfect exercise for understanding what we are doing. Explaining things is good, but nothing beats practice.

The Final Demo

Here is the demo once again to see how everything comes together.

See the Pen Responsive multi-line ribbon shapes by Temani Afif.

Wrapping Up

There we go, two more ribbons that build off of the ones we created together in the first article of this brief two-part series. If there’s only one thing you take away from these articles, I hope it’s that modern CSS provides us with powerful tools that offer different, more robust approaches to things we used to do a long time ago. Ribbons are an excellent example of a long-lived design pattern that’s been around long enough to demonstrate how creating them has evolved over time as new CSS features are released.

I can tell you that the two ribbons we created in this article are perhaps the most difficult shapes in my collection of ribbon shapes. But if you can wrap your head around the use of gradients — not only for backgrounds but masks and clipping paths as well — you’ll find that you can create every other ribbon in the collection without looking at my code. It’s getting over that initial hurdle that makes this sort of thing challenging.

You now have the tools to make your own ribbon patterns, too, so why not give it a try? If you do, please share them in the comments so I can see your work!

]]>
hello@smashingmagazine.com (Temani Afif)
<![CDATA[Creating And Maintaining A Voice Of Customer Program]]> https://smashingmagazine.com/2023/11/creating-maintaining-voice-customer-program/ https://smashingmagazine.com/2023/11/creating-maintaining-voice-customer-program/ Tue, 21 Nov 2023 12:00:00 GMT If you’re involved in developing or leading the development of digital or physical products, consider a Voice of Customer (VoC) program. A VoC program systematically gathers and analyzes customer insights, channeling user opinions into actionable intelligence. VoC programs use surveys, analytics, interviews, and more to capture a broad range of customer sentiment. When implemented effectively, a VoC program transforms raw feedback into a roadmap for strategic decisions, product refinement, and service enhancements.

By proactively identifying issues, optimizing offerings for user satisfaction, and tailoring products to real-time demand, VoC programs keep companies ahead. Moreover, in a world of fleeting consumer loyalty, such programs build trust and enhance the overall brand experience. VoC has long been a standard customer experience (CX) practice that UX and product teams can use to their advantage. We’ll focus on VoC for digital products in this article; however, the methods and lessons learned apply equally to those working with physical products.

Successful product teams and User Experience (UX) practitioners understand that customer feedback is invaluable. It guides decisions and fosters innovation for products and services. Whether it’s e-commerce platforms refining user interfaces based on shopper insights or social media giants adjusting algorithms in response to user sentiments, customer feedback is pivotal for digital success. Listening, understanding, and adapting to the customer’s voice are key to sustainable growth.

The Role Of UX Research In Capturing The Voice Of The Customer

UX research serves as the bridge that spans the chasm between a company’s offerings and its customers’ perspectives, playing a pivotal role in capturing the multifaceted VoC. Trained UX researchers transform raw feedback into actionable recommendations, guiding product development and design in a direction that resonates authentically with users.

Ultimately, UX research is the translator that converts the diverse, nuanced VoC into a coherent and actionable strategy for digital companies.

Setting Up A Voice Of Customer Program

Overview Of Steps

We’ve identified six key steps needed to establish a VoC program. At a high level, these steps are the following:

  1. Establishing program objectives and goals.
  2. Identifying the target audience and customer segments.
  3. Selecting the right research methods and tools.
  4. Developing a data collection framework.
  5. Analyzing and interpreting customer feedback.
  6. Communicating insights to stakeholders effectively.

We’ll discuss each of these steps in more detail below.

Establishing Program Objectives And Goals

Before establishing a VoC program, it’s crucial to define clear objectives and goals. Are you aiming to enhance product usability, gather insights for new feature development, or address customer service challenges? By outlining these goals, you create a roadmap that guides the entire program. You will also avoid taking on too much and maintain a focus on what is critical when you state your specific goals and objectives. Specific objectives help shape research questions, select appropriate methodologies, and ensure that the insights collected align with the strategic priorities of the company.

You should involve a diverse group of stakeholders in establishing your goals. You might have members of your product teams and leadership respond to a survey to help quantify what your team and company hope to get out of a VoC. You might also hold workshops to help gain insight into what your stakeholders consider critical for the success of your VoC. Workshops can help you identify how stakeholders might be able to assist in establishing and maintaining the VoC and create greater buy-in for the VoC from your stakeholders. People like to participate when it comes to having a say in how data will be collected and used to inform decisions. If you come up with a long list of goals that seem overwhelming, you can engage key stakeholders in a prioritization exercise to help determine which goals should be the VoC focus.

Identifying The Target Audience And Customer Segments

Once you have clear objectives and goals, defining the target audience and customer segments becomes important. For example, say your objective is to understand conversion rates across your various customer segments, and your goal is to increase sign-up conversion. You would want to determine whether your target audience should be people who have purchased within a certain time frame, people who have never made a purchase, people who have abandoned carts, or a mix of all three.

Analytics can be critical for creating shortcuts at this point. You might start by looking at analytics data collected on the sign-up page to identify which age groups are not signing up, whether there is evidence that certain segments are more likely to abandon carts, and which segments are less likely to visit your site at all. Then, based on these clear objectives and goals and the target audience and customer segments you’ve identified, you can select the right research methods and tools to collect data from the segment(s) whose feedback is most critical.

Selecting The Right Research Methods And Tools

The success of a VoC program hinges on the selection of appropriate research methods and tools. Depending on your objectives, you might employ a mix of quantitative methods like surveys and analytics to capture broad trends, along with qualitative methods like user interviews and usability testing to unearth nuanced insights. Utilizing digital tools and platforms can streamline data collection, aggregation, and analysis. These tools, ranging from survey platforms to sentiment analysis software, enhance efficiency and provide in-depth insights.

The key is to choose methods and tools that align with the program’s goals and allow for a holistic understanding of the customer’s voice.

Your UX researcher will be critical in helping to identify the correct methods and tools for collecting data.

For example, a company could be interested in measuring satisfaction with its current digital experience. If the company is not currently capturing any metrics, a mixed-methods approach could be used: first, understand customers’ current attitudes toward the digital experience at a large scale, and then dive deeper at a smaller scale after analyzing the survey. The quantitative survey could contain traditional metrics for measuring people’s feelings, such as Net Promoter Score (NPS), which attempts to measure customer loyalty using a single item, and/or the System Usability Scale (SUS), which attempts to measure system usability using a brief questionnaire. The data collected would then drive the types of questions asked in a qualitative interview.
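For context, these are standard instruments rather than anything specific to a given VoC program. NPS is derived from a single 0–10 “How likely are you to recommend us?” item:

NPS = % Promoters (scores of 9–10) − % Detractors (scores of 0–6)

The result ranges from −100 to +100. SUS, similarly, converts ten 5-point agreement items into a single 0–100 score.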

To collect the survey responses, you could use an online survey tool that drafts and scores these metric questions for you. Many tools include integrated analysis features that let you run statistical analyses on the quantitative data and light semantic reviews of the qualitative data. You can easily share the survey data with your stakeholder groups and then shape an interview protocol that allows you to reach out to a smaller group of users for deeper insight into the survey findings.

Table 1: Commonly used UX research methods to consider as part of a VoC program

User interviews
  When to use:
  • Gaining an in-depth understanding of user needs, motivations, and behaviors.
  • Uncovering hidden pain points and frustrations.
  • Generating new ideas and solutions.
  Type of data collected: Qualitative data (e.g., quotes, stories, opinions).

Surveys
  When to use:
  • Gathering quantitative data from a large number of users.
  • Measuring user satisfaction and attitudes.
  • Identifying trends and patterns.
  Type of data collected: Quantitative data (e.g., ratings, rankings, frequencies).

Focus groups
  When to use:
  • Generating a wide range of perspectives on a topic.
  • Exploring controversial or sensitive issues.
  • Gathering feedback on design concepts or prototypes.
  Type of data collected: Qualitative data (e.g., group discussions, consensus statements).

Usability testing
  When to use:
  • Identifying usability problems with a product or service.
  • Evaluating the effectiveness of design solutions.
  • Gathering feedback on user flows and task completion.
  Type of data collected: Qualitative and quantitative data (e.g., task completion rates, error rates, user feedback).

Analytics
  When to use:
  • Tracking user behavior on a website or app.
  • Identifying trends and patterns in user engagement.
  • Measuring the effectiveness of marketing campaigns.
  Type of data collected: Quantitative data (e.g., page views, time on site, conversion rates).

Developing A Data Collection Framework

Collecting feedback requires a structured approach to ensure consistency and reliability. Developing a data collection framework involves creating standardized surveys, questionnaires, and interview protocols that gather relevant information systematically. A well-designed framework ensures you capture essential data points while minimizing biases or leading questions. This framework becomes the backbone of data collection efforts, enabling robust analysis and comparison of feedback across various touchpoints and customer segments.

Your data collection framework should include the following:

  • Objectives and research questions.
  • Data sources, whether it’s surveys, user interviews, website analytics, or any other relevant means.
  • Data collection methods with an emphasis on reliability and validity.
  • A robust data management plan. This includes organizing data in a structured format, setting up appropriate storage systems, and ensuring data security and privacy compliance, especially if dealing with sensitive information.
  • Timing and frequency of data collection, as well as the duration of your study. A well-thought-out schedule ensures you gather data when it’s most relevant and over a suitable time frame.
  • A detailed data analysis plan that outlines how you will process, analyze, and draw insights from the collected data.

Analyzing And Interpreting Customer Feedback

Collecting data is only half the journey; the real value lies in analyzing and interpreting the data collected. This involves processing both quantitative data (such as survey responses) and qualitative data (such as open-ended comments). Data analysis techniques like sentiment analysis, thematic coding, and pattern recognition help distill valuable insights.

These insights unveil customer preferences, emerging trends, and pain points that might require attention. Your UX researcher(s) can take the lead, with assistance from other team members, in helping to analyze your data and interpret your findings. The interpretation phase transforms raw data into actionable recommendations, guiding decision-making for product improvements and strategic initiatives.

Communicating Insights To Stakeholders Effectively

The insights derived from a VoC program hold significance across various levels of the organization. Effectively communicating these insights to stakeholders is critical for driving change and garnering support. Presenting findings through clear, visually engaging reports and presentations helps stakeholders grasp the significance of customer feedback. Additionally, highlighting actionable recommendations and illustrating how they tie back to strategic objectives empowers decision-makers to make informed choices. Regularly updating stakeholders on progress, outcomes, and improvements reinforces the ongoing value of the VoC program and fosters a culture of customer-centricity within the organization.

Key Components Of A Successful Voice Of Customer Program

Building A Culture Of Feedback Within The Organization

A successful VoC program is rooted in an organizational culture that prioritizes feedback at all levels. This culture begins with leadership setting the example by actively seeking and valuing customer opinions. When employees perceive that feedback is not only encouraged but also acted upon, it fosters an environment of collaboration and innovation. This culture should extend across departments, from marketing to development to customer service, ensuring that every team member understands the role they play in delivering exceptional experiences. By integrating customer insights into the company’s DNA, a feedback culture reinforces the notion that everyone has a stake in the customer’s journey.

Start small and incorporate research activities into product development to begin harnessing a user-centric approach. Develop reports that showcase the business purpose, findings, and recommendations, and present them not only to the product development team and stakeholders but also to other departments to show the value of VoC research. Lastly, provide opportunities to collaborate with other departments to help them incorporate VoC into their daily activities. As a result, the culture around the VoC program becomes self-reinforcing.

There are many ways you can go about building this culture. Some specific examples we’ve used include facilitating cross-product or cross-discipline meetings to plan research and review findings, workshops bringing together stakeholders from various lines of business or roles to help shape the research agenda, and perhaps most importantly, identifying and utilizing a champion of insights to promote findings throughout the organization. Ideally, your champion would hold a position that allows them to have exposure horizontally across your business and vertically up to various key stakeholders and members of leadership. Your champion can help identify who should be attending meetings, and they can also be utilized to present findings or have one-off conversations with leadership to promote buy-in for your culture of feedback.

Implementing User-friendly Feedback Mechanisms

For a VoC program to thrive, feedback mechanisms must be accessible, intuitive, and seamless for customers. Whether it’s a user-friendly feedback form embedded within an app, a chatbot for instant assistance, or social media channels for open conversations, the channels for providing feedback should reflect the digital preferences of your audience. These mechanisms should accommodate both quantitative and qualitative inputs, enabling customers to share their experiences in a manner that suits them best. A key element here is the simplicity of the process; if users find it cumbersome or time-consuming to provide feedback, the program’s effectiveness can be compromised.

Encouraging Customer Participation And Engagement

Engaging customers is essential for gathering diverse perspectives. Incentivizing participation through rewards, gamification, or exclusive offers can increase engagement rates. Moreover, companies can foster a sense of ownership among customers by involving them in shaping future offerings. Beta testing, user panels, and co-creation sessions invite customers to actively contribute to product development, reinforcing the idea that their opinions are not only valued but directly influence the company’s direction. By making customers feel like valued collaborators, a VoC program becomes a mutually beneficial relationship.

Integrating Feedback Into The Decision-making Process

Customer feedback should not remain isolated; it needs to permeate the decision-making process across all departments. This integration demands that insights gathered through the VoC program are systematically channeled to relevant teams. Product teams can use these insights to refine features, marketers can tailor campaigns based on customer preferences, and support teams can address recurring pain points promptly. Creating feedback loops ensures that customer opinions are not only heard but also translated into tangible actions, demonstrating the organization’s commitment to iterative improvement driven by user insights.

Continuous Improvement And Iteration Of The VoC Program

A VoC program is a journey, not a destination. It requires a commitment to continuous improvement and adaptation. As customer behaviors and preferences evolve, the program must evolve in tandem. Regularly reviewing the program’s effectiveness, incorporating new data sources, and updating methodologies keep the program relevant. This also includes analyzing the program’s impact on KPIs such as customer satisfaction scores, retention rates, and revenue growth. By iterating the program itself, businesses ensure that it remains aligned with changing business goals and the ever-evolving needs of their customers.

Best Practices And Tips For An Effective VoC Program

Creating Clear And Concise Surveys And Questionnaires

The success of a VoC program often hinges on the quality of the surveys and questionnaires used to collect feedback. To ensure meaningful responses, it’s essential to design clear and concise questions that avoid ambiguity. Keep the surveys focused on specific topics to prevent respondent fatigue and make sure that the language used is easily understandable by your target audience. Utilize a mix of closed-ended (quantitative) and open-ended (qualitative) questions to capture both statistical data and rich, contextual insights. Prioritize brevity and relevance to encourage higher response rates and more accurate feedback.

Monitoring Feedback Across Multiple Channels

Customer feedback is shared through diverse channels: social media, email, app reviews, support tickets, and more. Monitoring feedback across these channels is essential for capturing a holistic view of customer sentiment. Centralize these feedback streams to ensure that no valuable insights slip through the cracks. By aggregating feedback from various sources, you can identify recurring themes and uncover emerging issues, allowing for proactive responses and continuous improvement. Note we have focused on digital products. However, if there is a physical component of your experience, such as a brick-and-mortar store, you should be collecting similar feedback from those customers in those settings.

Incorporating User Testing And Usability Studies

Incorporating user testing and usability studies is important for evaluating an experience with real users. While upfront activities like in-depth user interviews can articulate users’ desires and needs for an experience, they do not evaluate the updated experience itself. Findings and recommendations from user testing and usability studies should be incorporated into development sprints or backlogs. This will ensure that the experience consistently considers and reflects the VoC.

Ensuring Privacy And Data Security In The VoC Program

As you talk to users and develop your VoC program, you will constantly be collecting data. The data shared in reports should always be anonymized. Additionally, it will be very important to create documentation covering how you collect consent, along with your data policies. If data is not stored properly, you could face penalties and lose the trust of participants for future VoC activities.

Challenges Of Starting A Voice Of Customer Program

If you are committed to starting a VoC program from scratch and then maintaining it, you are likely to encounter many challenges. Gaining buy-in and commitment from stakeholders is a challenge for anyone looking to establish a VoC program, which requires a concerted effort across various departments within an organization. Securing buy-in and commitment from key stakeholders, such as executives, managers, and employees, is crucial for its success. Without their support, the program may struggle to gain traction and achieve its goals.

Resources are always an issue, so you’ll need to work on securing adequate funding for the program. Establishing and maintaining a VoC program can be a costly endeavor. This includes the cost of software, training, and staff time. Organizations must be prepared to allocate the necessary resources to ensure the success of the program.

You will also need to allocate sufficient time and resources to collect, analyze, and act on feedback; this can be a time-consuming process. Organizations must ensure that they have the necessary staff and resources in place to dedicate to the VoC program.

Case Study: Successful Implementation Of A VoC Program

We worked with a large US insurance company that was trying to transform its customers’ digital experience around purchasing and maintaining policies. At the start of the engagement, the client did not have a VoC program and had little experience with research. As a result, we spent a lot of time initially explaining to key stakeholders the importance and value of research and using the findings to make changes to their product as they started their digital transformation journey.

We created a slide deck and presentation outlining the key components of a VoC program, how a VoC program can be used to impact a product, methods of UX research, what type of data the methods would provide, and when to use certain methods. We also shared our recommendations based on decades of experience with similar companies. We socialized this deck through a series of group and individual meetings with key stakeholders. We had the benefit of an internal champion at the company who was able to identify and schedule time with key stakeholders. We also provided a copy of the material we’d created to socialize with people who were unable to attend our meetings or who wanted to take more time digesting information offline.

After our meetings, we fielded many questions about the process, including who would be involved, the resources required, timelines for capturing data and making recommendations, and the potential limitations of certain methods. We should have accounted for these types of questions in our initial presentation.

Here are the VoC activities we used, their purpose, and who was involved:

In-Depth User Interviews
  Purpose: One-on-one interviews focused on customers’ current usage, desires, and pain points related to the existing experience. Later in the product development cycle, they were used to understand customers’ feelings toward the new product and which features should be prioritized or enhanced in future releases.
  Involvement: Product, sales, and marketing teams

Concept Testing
  Purpose: One-on-one concept testing with customers to gather feedback on the high-level design concepts.
  Involvement: Product, sales, and marketing teams

Unmoderated Concept Testing
  Purpose: Unmoderated concept testing with customers to gather feedback on the materials the business provides to customers. The goal was to reach more people and increase the amount of feedback.
  Involvement: Product, sales, and marketing teams

Usability Testing
  Purpose: One-on-one usability testing sessions with customers to identify behaviors, usability issues, uses, and challenges of the new product.
  Involvement: Product, sales, and marketing teams

Kano Model Survey
  Purpose: A survey to gather customer input on features from the product backlog to help the business prioritize them for future development.
  Involvement: Product team

Benchmarking Survey
  Purpose: A survey to understand users’ attitudes toward the digital experience, used to compare customers’ attitudes as enhancements are made to it. Metrics included Net Promoter Score, the System Usability Scale, and semantic differential scales.
  Involvement: Product, sales, and marketing teams

One large component of enhancing the customers’ digital experience was implementing a service portal. To better understand users’ needs and desires for this portal, we started by conducting in-depth user interviews. This first VoC activity helped show the business the value of VoC research and how it can be used to develop a product with a user-centric approach.

Our biggest challenge during this first activity was recruiting participants. We were unable to use a third-party service to help recruit participants, so we had to build a pool of potential participants through the sales division. As mentioned before, the company didn’t have much exposure to VoC work, so any time we worked with a division that hadn’t heard of VoC, we spent additional time walking through what it is and what we were doing. Once we explained our work to the sales team, they helped by providing a list of participants to recruit for this activity and future ones.

After we received a list of potential participants, we crafted an email with a link to a scheduling tool where potential participants could sign up for interview slots. The email was sent from a generic email address to more than 50 potential participants. Even though we sent multiple reminder emails, we could only gather 5–8 participants for each VoC activity.

As we conducted more VoC activities and presented our findings to larger audiences throughout the company, more divisions became interested in participating in the VoC program. For example, we conducted unmoderated concept testing for a division that was looking to redesign some PDFs. Their goal was to understand customers’ needs and preferences to drive the redesign process. Additionally, we also helped a vendor conduct usability testing for the company to understand how user-friendly an application system was. This was one way to help grow the VoC program within the company as well as their relationship with the vendor.

We needed to do more than foster a culture of gathering customer feedback. As we began to execute the VoC program more extensively within the company, we utilized methods that went beyond simply implementing feedback. These methods allowed the VoC program to continue growing autonomously.

We introduced a benchmarking survey for the new portal. This survey’s purpose was to gauge the customer experience with the new portal over time, starting even before the portal’s release. This not only served as a means to measure the customer experience as it evolved but also provided insights into the maturation of the VoC program itself.

The underlying assumption was that if the VoC program were maturing effectively, the data gathered from the customer experience benchmarking survey would show customers enjoying an improved digital experience thanks to changes and decisions increasingly informed by VoC findings.

Next, we focused on transferring our knowledge to the company so the VoC program could continue to mature over time without us there. From the beginning, we were transparent about our processes and the creation of materials for each VoC activity. We wanted to create a collaborative environment, both to make sure we understood the company’s needs and questions and so the company could understand the process for executing a VoC activity. We accomplished this in part by involving our internal champion at the company in all of the various studies we conducted and the conversations we were having with various business units.

We’d typically start with a request or hypothesis from a division of the company. For example: once the portal is launched, what are people’s opinions of it, and what functionality should the business focus on? Then, we would draft materials covering the approach and questions. In this case, we decided to conduct in-depth user interviews so we could dive deep into users’ needs, challenges, and desires.

Next, we would conduct a series of working sessions to refine the questions and ensure that they still aligned with the company’s goals for the activity. Once we had all the materials finalized, we had them reviewed by the legal team and began to schedule and recruit participants. Lastly, we would conduct the VoC activity, synthesize the data, and create a report to present to different divisions within the company.

We started the transfer of knowledge and responsibilities to the company by slowly handing over some of these tasks related to executing a VoC activity. With each new task the company took charge of, we set additional time aside to debrief and provide details on what was done well and what could be improved upon. The goal was for the individuals at the company to learn by doing, taking on incremental new tasks as they grew more comfortable. Lastly, we provided documentation to leave behind, including a help guide they could refer to when continuing to execute VoC activities.

We concluded our role managing the VoC program by handing over duties and maintenance to the internal champion who had worked with us from the beginning. We stayed engaged, offering a few hours of consulting time each month; however, we were no longer managing the program. Months later, the program is still running, with a focus on collecting feedback on updates being made to products in line with their respective roadmaps. The client has used many of the lessons we learned to continue overcoming challenges with recruiting and to effectively socialize the findings across the various teams impacted by VoC findings.

Overall, while helping to build this VoC program, we learned a lot. One of our biggest pain points was participant recruitment. The process of locating users and asking them to participate in studies was new for the company. We quickly learned that their customers didn’t have a lot of free time, and unmoderated VoC activities or surveys were ideal for the customers as they could complete them on their own time. As a result, when possible, we opted to execute a mixed-methods approach with the hope we could get more responses.

Another pain point was technology. Some of the tools we’d hoped to use were blocked by the company’s firewall, which made scheduling interviews a little more difficult. Additionally, some divisions had access to certain quantitative tools, but the licenses couldn’t easily be used across divisions, so workarounds had to be created to implement some surveys. As a result, being creative and willing to think about short-term workarounds was important when developing the VoC program.

Conclusion

Building a successful VoC program is an ongoing effort. It requires a commitment to continuously collecting, analyzing, and acting on customer feedback. This can be difficult to sustain over time, as other priorities may take precedence. However, a successful VoC program is essential for any organization that is serious about improving the customer experience.

We’ve covered the importance of VoC programs for companies with digital products or services. We recommend you take the approach that makes the most sense for your team and company. We’ve provided details on starting and maintaining a VoC program, including the upfront work needed to define objectives and goals, target the right audience, choose the right methods, put this all into a framework, collect and analyze data, and communicate your findings effectively.

We suggest you start small and have fun growing your program. When done right, you will soon find yourself overwhelmed with requests from other stakeholders to expand your VoC to include their products or business units. Keep in mind that your ultimate goal is to create a product that resonates with users and meets their needs. A VoC program ensures you are constantly collecting relevant data and taking actionable steps to use the data to inform your product or business’s future. You can refine your VoC as you see what works well for your situation.

]]>
hello@smashingmagazine.com (Victor Yocco & Dana Daniels)
<![CDATA[An Efficient Design-to-Code Handoff Process Using Uno Platform For Figma]]> https://smashingmagazine.com/2023/11/design-to-code-handoff-process-uno-platform-figma/ https://smashingmagazine.com/2023/11/design-to-code-handoff-process-uno-platform-figma/ Fri, 17 Nov 2023 10:00:00 GMT Effective collaboration between designers and developers is vital for creating a positive user experience, but bridging the gap between design and code can be challenging at the best of times. The handoff process often leads to communication gaps, inconsistencies, and, most importantly, lost productivity, causing frustration for both designers and developers.

When we try to understand where most of the time is spent building software applications, we notice that a significant amount of it is lost to a lack of true collaboration, to miscommunication, and to the absence of a single “source of truth.” As a result, we see designers creating great user experiences for clients and passing on their approved designs with little input from the developers. The developers then attempt to recreate the designs with their own tools, resulting in a complicated, cost-intensive process and, often, unimplementable designs.

Developers and integrators are supposed to closely inspect a design and analyze all there is to it: the margins, spacing, alignments, the types of controls, and the different visual states the user interface might go through during interactions (a loading state, a partial state, an error state, and so on). They then have to figure out how to write the code for this interface. Traditionally, this is the design handoff process, and it has long been the norm.

We at Uno Platform believe that a better, more pragmatic approach must exist. So, we focused on improving the workflow and adding the option to generate the code for the user interface straight from the design while allowing developers to build upon the generated code’s foundation and further expand it.

“When designers are fully satisfied with their designs, they create a document with all the details and digital assets required for the development team to bring the product to life. This is the design handoff process.”

— UX Design Institute, “How to create a design handoff for developers

Let’s look at some of the problems associated with the handoff process and how the Uno Platform for Figma Plugin may help alleviate the pitfalls traditionally seen in this process.

Note: The Uno Platform Plugin for Figma is free, so if you’d like to try it while reading this article, you can install it right away. Uno Platform is a developer productivity platform that enables the creation of native mobile, web, desktop, and embedded apps using a single codebase. As part of its comprehensive tooling, Uno Platform offers the Uno Figma plugin, enabling easy translation of designs to code. (While Uno Platform is free and open-source, the plugin itself is free but not open-source.) If you want to explore the code or contribute to the Uno project, please visit the Uno Platform GitHub repository.

A significant factor in choosing our current development path lies in the belief that Figma outperforms the Sketch + Zeplin combination (among others) because of its platform-agnostic nature. Being a web-based tool, Figma is more universal, while Sketch is limited to macOS, and sharing designs with non-Mac developers necessitates third-party software. Figma offers an all-in-one solution, making it a more convenient option.

In addition, Figma now also supports plugins, many of which assist with the handoff process specifically. For example, Rogie King recently launched a plugin, “Clippy — add attachments,” that lets you add attachments to your Figma files, which is very useful during the handoff process. Of course, there are many others.

These and other factors tipped the scales toward Figma Design for us. And we aren’t alone; others have picked up Figma as their key design tool after doing some research and then trying things in practice.

“Comparing the must-haves against the features of a list of possible design apps we compiled (including Sketch, Axure RP, Framer, and more), Figma came out as the clear forerunner. We decided to proceed with a short-term trial to test Figma’s suitability, and we haven’t looked back since. It met all of our key feature requirements but surprised us with a lot of other things along the way.”

— Simon Harper, “Why we chose Figma as our primary design tool
Working with the Uno Figma Plugin

1: Get Started with Uno Platform for Figma Plugin

Uno for Figma presents a significant advantage for teams that rely on Figma as their primary design tool.

By combining the familiarity and efficiency of Figma with the capabilities of the Uno Platform, designers can seamlessly continue working in a familiar environment that they are already comfortable with while knowing their designs will be integrated practically by their development team.

Getting started with the Uno Figma plugin is a straightforward process:

  1. Install the Uno Platform (Figma to C# or XAML) plugin.
  2. Open the Uno Platform Material Toolkit design file available from the Figma Community.

With the Uno Material Toolkit, you no longer need to design many of the components from scratch as the toolkit provides UI (user interface) controls designed specifically for multi-platform, responsive applications.

Note: To use the plugin, you must create your design inside the Uno Material Toolkit Figma file, using its components. Without it, the plugin won’t be able to generate any output, making it incompatible with existing Figma designs.

2: Setting Up Your Design

Once you have installed the Uno Platform for Figma plugin and opened the Material Toolkit design file, you can use the “Getting Started” page in the file layers to set up your design. The purpose of this page is to simplify the process of defining your application’s theme, colors, and font styles.

You can also use DSP Tooling in Uno.Material for Uno Platform applications. This feature has been one of the top requests from the Uno Platform community, and it is now available both as a package and as part of the App Template Wizard.

If you’re unfamiliar with DSPs (Design System Packages), a DSP is essentially a repository of design assets, including icons, buttons, and other UI elements, accompanied by JSON files containing design system information. With a DSP, you can craft a personalized theme that aligns with your brand, effortlessly integrate it into your Uno Platform application through the Figma plugin, and implement theme changes across the entire user interface of your application.

The ability to import custom Design System Packages (DSPs) is a significant development for Uno Platform. With this feature, designers can create and manage their own design systems, which can be shared across projects and teams. This not only saves time but also ensures consistency across all design work. Additionally, it allows designers to maintain control over the design assets, making it easier to make updates and changes as needed.

Note: The “Getting Started” page offers step-by-step instructions for modifying the colors and fonts of the user interface, including previewing your overall theme. While you can modify these later, I’d recommend doing this right at the beginning of your project for better organization.

Afterward, create a new page and ensure that you begin by using the Standard Page Template provided in the Uno Toolkit components to start the design of your application. It’s essential to remember that you will have to detach the instance from the template to utilize it.

3: Start Creating The Design

Most Toolkit Components have variants that will act as time savers, allowing a single component to contain its various states and variations, with specific attributes that you may toggle on and off.

For example, button components have a Leading Icon variant so you can use the same element with or without icons throughout the design.

The Uno Toolkit provides a comprehensive library of pre-built components and templates that come with the appropriate layer structures to generate XAML (eXtensible Application Markup Language, Microsoft’s XML-based language for describing a graphical user interface) and C# Markup, and it allows designers to preview their designs using the Uno plugin. This helps synchronize design and development efforts, maintain project consistency, and optimize code output. Furthermore, these components can be reused, making it easier to create and manage consistent designs across multiple projects.

4: Live Preview Your App

The Previewer in the Uno Platform is a powerful tool that enables designers to troubleshoot their designs and catch issues before the handoff process starts, avoiding redundancy in communications with developers. It provides a live interactive preview of the translated look, behavior, and code of the application’s design, including all styling and layout properties. Designers can interact with their design as if it is a running app, scrolling through content and testing components to see how they behave.

To preview the designed user interface, follow these steps:

  1. In Figma, select the screen you want to preview.
  2. Right-click the screen → PluginsUno Platform (Figma to C# or XAML).
  3. Once the plugin has launched, select the Preview tab.
  4. Press the Refresh button.

Getting Started With The Tutorial: First Steps

If you’re new to using the Uno Platform for Figma plugin, the first step is to install it from the Figma community. After downloading the plugin, proceed with the following steps:

  1. Navigate to the Uno Material Toolkit File in the Figma community and select Open in Figma to start a new project.
  2. Setting up your project theme first is optional but recommended. You can set your desired theme from the Getting Started page (included in the file) or import a DSP (Design System Package) file to quickly transform your theme.
  3. Create a new page. Within the Resources tab of the menu, under Components, find and select the “Standard Page Template.”
  4. Right-click on the template and select Detach instance.

These are the initial steps for all new projects using the Uno Platform for Figma plugin and Uno Material Toolkit — not only the steps for this tutorial. This workflow will set you on the right path to creating various mobile app designs effectively.

Designing With Uno Material Toolkit

You can follow along with the Uno Flights file template, which you can use for reference. Please note that when building your UI design, you should only use components that are part of the Material Toolkit file. Refer to the components page to see the list of available components.

Step 1: Search Results, Sort, And Filter

First, implement the search results and the Sort and Filter action buttons by following these steps:

  1. Add a TextBox and change the placeholder text.
  2. In the TextBox properties, toggle on a leading/trailing icon. (optional)
  3. Add Text to display the number of results and the associated text. (Use Shift + A to add them to an Auto Layout.)
  4. Add an IconButton (use the components provided by the Material Toolkit) and swap the default icon to a Filter icon. Include accompanying Text for the icon, and group them within a frame with a horizontal Auto Layout.
  5. Repeat the previous step for the filter action, adding an IconButton with a filter icon accompanied by Text and placing them in an Auto Layout.
  6. Nest both actions within another Auto Layout.
  7. Finally, group the three sections (number of results, sort action, and filter action) into an Auto Layout.
  8. Add the SearchBox and your final Layout and nest them inside Content.Scrollable.

By following these steps, you should see a result similar to the example below:

Step 2: Flight Information

The Flight Itinerary block can be divided into three sections:

  • Flight Times and the ProgressBar are included in the first Auto Layout.
  • Airport and Flight Information are organized in a separate Auto Layout.
  • Airline Information and Flight Costs are presented in a third Auto Layout.

Flight Times and ProgressBar

  1. Insert two Text elements for arrival and departure times.
  2. Locate the ProgressBar component in the Resources tab and add it between the two times created.
  3. Group the three components (arrival time, ProgressBar, departure time) into an Auto Layout.
  4. Add an icon and swap the instance with a plane icon.
  5. Select the absolute position and place the plane icon at the beginning of the ProgressBar.

Flight Info

  1. Insert Text for your flight time and flight status.
  2. Apply an Auto Layout to organize them and set it to Fill.
  3. Proceed to add Text for the Airport initials.
  4. Combine the two Texts and the previously created Auto Layout into a new horizontal Auto Layout.

Airline Information

  1. Add the necessary Text for Airline Information and pricing.
  2. Select both Text elements and apply an Auto Layout to them.
  3. Set the frame of the Auto Layout to Fill.
  4. Adjust the horizontal gap as desired.

Select the three sections you want to modify:

  1. Add a new Auto Layout.
  2. Apply a Fill color to the new layout.
  3. Adjust the vertical and horizontal spacing according to your preference.
  4. Move your component to the Standard Page Template by dragging it below the Content.Scrollable layer.

Step 3: Bottom TabBar

The Bottom TabBar is relatively simple to modify and is part of the Standard Page Template. To complete this section:

  1. Select each item from the Bottom TabBar.
  2. Expand it to the lowest icon layer.
  3. In the Design tab, replace the current instance with the appropriate one.
  4. Next, select the associated Text and update it to match the corresponding menu item.

Step 4: Preview, Export, And Transition From Figma To Visual Studio

Once the user interface design is finalized in Figma, the Uno Platform Plugin enables us to render and preview our application with the click of a button. Within the Previewer, you can interact with the components, such as clicking the buttons, scrolling, toggling various functionalities, and so on.

After previewing the app, we can examine the XAML code generated in the Export tab.

Open the Uno Figma Plugin (right-click the screen → Plugins → Uno Platform (Figma to C# or XAML)).

For XAML

  1. You can change your namespace in the first tab (Properties) under Application (optional).
  2. Go to the Export tab and select Copy to Clipboard (bottom right button).
  3. Open Visual Studio and create a new project using the Uno App Template Wizard (this is where you will choose between using XAML or C# Markup for your user interface).
  4. Open your MainPage.xaml file, remove the existing code, and paste your exported code from the Uno Figma Plugin.
  5. Change your x:Class and xmlns:local namespaces.
  6. Export the color override file and paste it into your ColorPaletteOverride.xaml.

For C# Markup

  1. Go to the Export tab and select all contents from the line after this to the semicolon ; at the end. (See the screenshot below.)
  2. Copy the selected code to the clipboard (Ctrl/Cmd + C on Win/Mac).
  3. In Visual Studio, create a new project using the Uno App Template Wizard. (This is where you will choose between using XAML or C# Markup for your user interface.)
  4. Open MainPage.cs and replace all the Page contents with the copied code.
  5. To set the appropriate font size for all buttons, access the MaterialFontsOverride.cs file in the Style folder. Go to the Figma Plugin, and in the Export tab, select Fonts Override File from the dropdown menu. Copy the content in the ResourceDictionary and replace it in your MaterialFontsOverride.cs.

Here’s an example of the generated XAML (and also C#) code that you can import into Microsoft Visual Studio:

flightXAML.txt (38 kB)

flightCsharp.txt (56 kB)

Conclusion

Harmonizing design and development is no easy task, and the nuances between teams make it so there is no one-size-fits-all solution. However, by focusing on the areas that most often affect productivity, the Uno Platform for Figma tool helps enhance designer-developer workflows. It facilitates the efficient creation of high-fidelity designs, interactive prototypes, and the export of responsive code, making the entire process more efficient.

The examples provided in the article primarily showcase working with mobile design versions. However, nothing in the document or the generated code restricts you from creating suitable versions for desktops, laptops, tablets, and the web. Specify the desired resolutions and responsive elements (and how they should behave), and the designs you create should be easily adaptable across different platforms and screen sizes.

Further Reading

  • “Five is for 5X productivity. Announcing Uno Platform 5.0,” (Uno Platform)
    This article provides an overview of all the new features available in the Uno Platform, including the new Figma to C# Markup plugin feature.
  • “Intro to Figma for .NET Developers,” (Uno Platform)
    This article provides an overview of Figma and its features and how .NET developers can use it together with the Uno Platform to streamline their design-to-development workflows.
  • “Uno Platform 5.0 — Figma plugin, C# Markup, and Hot Reload showcase via Uno Tube Player sample app,” (YouTube)
    A short video highlight for Uno Platform 5.0, edited from steps in the Tube Player workshop (a Figma Design file that is tailored specifically to .NET developers).
  • “Uno Platform for Figma — Uno Flight speed build,” (YouTube)
    A short video that shows the making of the Uno Flight app UI compressed into only a minute and a half.
  • “Building a Login Page with Uno Platform and Figma,” (Uno Platform)
    The Uno Platform’s plugin and toolkit offer a large library of ready-to-use components, allowing developers and designers to take advantage of a set of higher-level user interface controls designed specifically for multi-platform, responsive applications.
  • “Building a Profile Page with Uno Platform for Figma,” (Uno Platform)
    In this tutorial, you will learn how to build a completely functional Profile page using Figma and Uno Platform and how to generate responsive and extendable XAML code.
  • “Replicating Pet Adoption UI with Uno Platform and Figma,” (Uno Platform)
    This tutorial walks you through creating a Pet Adoption user interface mobile screen and exporting your Figma designs into code, including setting up your Figma file and designing with Uno Material Toolkit components.
  • “From Figma to Visual Studio — Adding Back-End Logic to Goodreads App,” (Uno Platform)
    The Uno Platform has a XAML tab (which houses the generated code for the page you created in Figma) and a Themes tab (which houses the Resource Dictionary for that page). This tutorial contains a working Goodreads sample and provides many details on using the XAML and Themes tabs.
  • “Replicating a Dating App UI with .NET, Uno Platform and Figma,” (Uno Platform)
    In this tutorial, you’ll learn how to use Uno Platform to create a dating app user interface, covering various sections and components of the interface in detail; at the end, you’ll also be able to export the design into Visual Studio Code.
  • “Getting Started with Uno Toolkit,” (Uno Platform)
    Detailed developer documentation pages for working with Uno Platform.
  • “The 12 best IDEs for programming,” Franklin Okeke
    To keep up with the fast pace of emerging technologies, there has been an increasing demand for IDEs among software development companies. This article explores the 12 best IDEs that currently offer valuable solutions to programmers.
  • “Why we chose Figma as our primary design tool,” Simon Harper (Purplebricks Digital)
    Comparing the must-haves against the features of a list of possible design apps they compiled (including Sketch, Axure RP, Framer, and more), Figma came out as the clear forerunner. It met all of their key feature requirements but surprised them with a lot of other things along the way.
  • “Why we switched to Figma as the primary design tool at Zomato,” Vijay Verma
    Before Figma, several other tools were used to facilitate the exchange of design mockups and updates; after Figma, the need for other tools and services was reduced, as everything comes in one single package.
  • “The Best Handoff Is No Handoff,” Vitaly Friedman (Smashing Magazine)
    Design handoffs are often inefficient and painful; they cause frustration, friction, and a lot of back and forth. Can we avoid them altogether? This article discusses in detail the “No Handoff” fluid model, where product and engineering teams work on the product iteratively all the time, with functional prototyping as the central method of working together.
  • “Designing A Better Design Handoff File In Figma,” Ben Shih (Smashing Magazine)
    Creating an effective handoff process from design to development is a critical step in any product development cycle. This article shares many practical tips to enhance the handoff between design and development, with guidelines for effective communication, documentation, design details, version control, and plugin usage.
  • “How I use Sketch with Zeplin to Design and Specify Apps,” Marc Decerle
    Sketch is a very powerful tool in combination with Zeplin. In this article, the author describes how he organizes his Sketch documents and how he uses Sketch in conjunction with Zeplin.
  • Design System Package (DSP)
    This document describes the Design System Package structure, including details on how each internal file or folder should be used.
]]>
hello@smashingmagazine.com (Matthew Mattei)
<![CDATA[CSS Responsive Multi-Line Ribbon Shapes (Part 1)]]> https://smashingmagazine.com/2023/11/css-responsive-multi-line-ribbon-shapes-part1/ https://smashingmagazine.com/2023/11/css-responsive-multi-line-ribbon-shapes-part1/ Wed, 15 Nov 2023 10:00:00 GMT Back in the early 2010s, it was nearly impossible to avoid ribbon shapes in web designs. It was actually back in 2010 that Chris Coyier shared a CSS snippet that I am sure has been used thousands of times over.

And for good reason: ribbons are fun and interesting to look at. They’re often used for headings, but that’s not all, of course. You’ll find corner ribbons on product cards (“Sale!”), badges with trimmed ribbon ends (“First Place!”), or even ribbons as icons for bookmarks. Ribbons are playful, wrapping around elements, adding depth and visual anchors to catch the eye’s attention.

I have created a collection of more than 100 ribbon shapes, and we are going to study a few of them in this little two-part series. The challenge is to rely on a single element to create different kinds of ribbon shapes. What we really want is a shape that accommodates as many lines of text as you throw at it. In other words, there are no fixed dimensions or magic numbers; the shape should adapt to its content.

Here is a demo of what we are building in this first part:

Sure, this is not the exact ribbon shape we want, but all we are missing is the cutouts on the ends. The idea is to first start with this generic design and add the extra decoration as we go.

Both ribbons in the demo we looked at are built using pretty much the same exact CSS; the only differences are nuances that help differentiate them, like color and decoration. That’s my secret sauce! Most of the ribbons from my generator share a common code structure, and I merely adjust a few values to get different variations.

Let’s Start With The Gradients

Any time I hear that a component’s design needs to be repeated, I instantly think of background gradients. They are perfect for creating repeatable patterns, and they are capable of drawing lines with hard stops between colors.

We’re essentially talking about applying a background behind a text element. Each line of text gets the background and repeats for as many lines of text as there happens to be. So, the gradient needs to be as tall as one line of text. If you didn’t know it, we recently got the new line height (lh) unit in CSS that allows us to get the computed value of the element’s line-height. In our case, 1lh will always be equal to the height of one line of text, which is perfect for what we need.

Note: It appears that Safari uses the computed line height of a parent element rather than basing the lh unit on the element itself. I’ve accounted for that in the code by explicitly setting a line-height on the body element, which is the parent in our specific case. But hopefully, that will be unnecessary at some point in the future.

Let’s tackle our first gradient. It’s a rectangular shape behind the text that covers part of the line and leaves breathing space between the lines.

The gradient’s red color is set to 70% of the height, which leaves 30% of transparent color to account for the space between lines.

h1 {
  --c: #d81a14;

  background-image: linear-gradient(var(--c) 70%, #0000 0);
  background-position: 0 .15lh;
  background-size: 100% 1lh;
}

Nothing too complex, right? We've established a background gradient on an h1 element. The color is controlled with a CSS variable (--c), and we’ve sized it with the lh unit to align it with the text content.

Note that the offset (.15lh) is equal to half the space between lines. We could have used a gradient with three color values (e.g., transparent, #d81a14, and transparent), but it’s more efficient and readable to keep things to two colors and then apply an offset.
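
For the curious, here is a sketch of what that three-color alternative would look like. The rendered stripe is identical, which is why the two-color version with an offset wins on brevity:

h1 {
  --c: #d81a14;

  /* transparent gap, colored stripe, transparent gap; no offset needed */
  background-image: linear-gradient(#0000 15%, var(--c) 0 85%, #0000 0);
  background-size: 100% 1lh;
}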

Next, we need a second gradient for the wrapped or slanted part of the ribbon. This gradient is positioned behind the first one. The following figure demonstrates this with a little opacity added to the front ribbon’s color to see the relationship better.

Here’s how I approached it:

linear-gradient(to bottom right, #0000 50%, red 0 X, #0000 0);

This time, we’re using keywords to set the gradient’s direction (to bottom right). Meanwhile, the color starts at the diagonal (50%) instead of its default 0% and should stop at a value that we’re indicating as X for a placeholder. This value is a bit tricky, so let’s get a visual that illustrates what we’re doing.

The green arrow illustrates the gradient direction, and we can see the different color stops: 50%, X, and 100%. We can apply some geometry rules to solve for X:

(X - 50%) / (100% - 50%) = 70%/100%
X = 85%

This gives us the exact point for the end of the gradient’s hard color stop. We can apply the 85% value to our gradient configuration in CSS:

h1 {
  --c: #d81a14;

  background-image: 
    linear-gradient(var(--c) 70%, #0000 0), 
    linear-gradient(to bottom left, #0000 50%, color-mix(in srgb, var(--c), #000 40%) 0 85%, #0000 0);
  background-position: 0 .15lh;
  background-size: 100% 1lh;
}

You’re probably noticing that I added the new color-mix() function to the second gradient. Why introduce it now? Because we can use it to mix the main color (#d81a14) with white or black. This allows us to get darker or lighter values of the color without having to introduce more color values and variables to the mix. It helps keep things efficient!

We have all of the coordinates we need to make our cuts using the polygon() function on the clip-path property. Coordinates are not always intuitive, but I have expanded the code and added a few comments below to help you identify some of the points from the figure.

h1 {
  --r: 10px; /* control the cutout */

  clip-path: polygon(
   0 .15lh, /* top-left corner */
   100% .15lh, /* top-right corner */
   calc(100% - var(--r)) .5lh, /* top-right cutout */
   100% .85lh, /* back to the right edge */
   100% calc(100% - .15lh), /* bottom-right corner */
   0 calc(100% - .15lh), /* bottom-left corner */
   var(--r) calc(100% - .5lh), /* bottom-left cutout */
   0 calc(100% - .85lh) /* back to the left edge */
  );
}

This completes the first ribbon! Now, we can wrap things up (pun intended) with the second ribbon.
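
Before moving on, here is the first ribbon consolidated into a single rule, combining the gradients and the clip-path from above into one sketch:

h1 {
  --c: #d81a14; /* main color */
  --r: 10px;    /* control the cutout */

  background-image: 
    linear-gradient(var(--c) 70%, #0000 0), 
    linear-gradient(to bottom left, #0000 50%, color-mix(in srgb, var(--c), #000 40%) 0 85%, #0000 0);
  background-position: 0 .15lh;
  background-size: 100% 1lh;
  clip-path: polygon(
   0 .15lh,
   100% .15lh,
   calc(100% - var(--r)) .5lh,
   100% .85lh,
   100% calc(100% - .15lh),
   0 calc(100% - .15lh),
   var(--r) calc(100% - .5lh),
   0 calc(100% - .85lh)
  );
}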

The Second Ribbon

We will use both pseudo-elements to complete the shape. The idea can be broken down like this:

  1. We create two rectangles that are placed at the start and end of the ribbon.
  2. We rotate the two rectangles with an angle that we define using a new variable, --a.
  3. We apply a clip-path to create the triangle cutout and trim where the green gradient overflows the top and bottom of the shape.

First, the variables:

h1 {
  --r: 10px;  /* controls the cutout */
  --a: 20deg; /* controls the rotation */
  --s: 6em;   /* controls the size */
}

Next, we’ll apply the styles that the :before and :after pseudo-elements share in common:

h1:before,
h1:after {
  content: "";
  position: absolute;
  height: .7lh;
  width: var(--s);
  background: color-mix(in srgb, var(--c), #000 40%);
  rotate: var(--a);
}
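
Note that because the pseudo-elements are absolutely positioned, all of this assumes the h1 itself establishes the positioning context:

h1 {
  position: relative;
}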

Then, we position each pseudo-element and make our clips:

h1:before {
  top: .15lh;
  right: 0;
  transform-origin: top right;
  clip-path: polygon(0 0, 100% 0, calc(100% - .7lh / tan(var(--a))) 100%, 0 100%, var(--r) 50%);
}

h1:after {
  bottom: .15lh;
  left: 0;
  transform-origin: bottom left;
  clip-path: polygon(calc(.7lh / tan(var(--a))) 0, 100% 0, calc(100% - var(--r)) 50%, 100% 100%, 0 100%);
}

We are almost done! We still have some unwanted overflow where the repeating gradient bleeds out of the top and bottom of the shape. Plus, we need small cutouts to match the pseudo-element’s shape.

It’s clip-path again to the rescue, this time on the main element:

clip-path: polygon(
    0 .15lh,
    calc(100% - .7lh/sin(var(--a))) .15lh,
    calc(100% - .7lh/sin(var(--a)) - 999px) calc(.15lh - 999px*tan(var(--a))),
    100% -999px,
    100% .15lh,
    calc(100% - .7lh*tan(var(--a)/2)) .85lh,
    100% 1lh,
    100% calc(100% - .15lh),
    calc(.7lh/sin(var(--a))) calc(100% - .15lh),
    calc(.7lh/sin(var(--a)) + 999px) calc(100% - .15lh + 999px*tan(var(--a))),
    0 999px,
    0 calc(100% - .15lh),
    calc(.7lh*tan(var(--a)/2)) calc(100% - .85lh),
    0 calc(100% - 1lh)
);

Ugh, looks scary! I’m taking advantage of a new set of trigonometric functions that help a bunch with the calculations but probably look foreign and confusing if you’re seeing them for the first time. There is a mathematical explanation behind each value in the snippet that I’d love to explain, but it’s long-winded. That said, I’m more than happy to explain them in greater detail if you drop me a line in the comments.

Our second ribbon is completed! Here is the full demo again with both variations.

You can still find the code within my ribbons collection, but it’s a good exercise to try writing the code without it. Maybe you will find a different implementation than mine and want to share it with me in the comments! In the next article of this two-part series, we will increase the complexity and produce two more interesting ribbon shapes.

]]>
hello@smashingmagazine.com (Temani Afif)
<![CDATA[Designing Web Design Documentation]]> https://smashingmagazine.com/2023/11/designing-web-design-documentation/ https://smashingmagazine.com/2023/11/designing-web-design-documentation/ Mon, 13 Nov 2023 13:00:00 GMT As an occasionally competent software developer, I love good documentation. It explains not only how things work but why they work the way they do. At its best, documentation is much more than a guide. It is a statement of principles and best practices, giving people the information they need to not just understand but believe.

As soft skills go in tech land, maintaining documentation is right up there. Smashing has previously explored design documents in a proposal context, but what happens once you’ve arrived at the answer and need to implement? How do you present the information in ways that are useful to those who need to crack on and build stuff?

Documentation often has a technical bent to it, but this article is about how it can be applied to digital design — web design in particular. The idea is to get the best of both worlds to make design documentation that is both beautiful and useful — a guide and manifesto all at once.

An Ode To Documentation

Before getting into the minutia of living, breathing digital design documentation, it’s worth taking a moment to revisit what documentation is, what it’s for, and why it’s so valuable.

Documentation describes how a product, system, or service works, what it’s for, why it’s been built the way it has, and how you can work on it without losing your already threadbare connection to your own sanity.

We won’t get into the nitty-gritty of code documentation; there are plenty of Smashing articles that scratch that itch.

However, in brief, here are a few of the key benefits of documentation.

Less Tech Debt

Our decisions tend to be much more solid when we have to write them down and justify them as something more formal than self-effacing code comments. Having clear, easy-to-read code is always something worth striving for, but supporting documentation can give essential context and guidance.

Continuity

We work in an industry with an exceptionally high turnover rate. The wealth of knowledge that lives inside someone’s head disappears with them when they leave. If you don’t want to reinvent the wheel every time someone moves on, you better learn to love documentation. That is where continuity lies.

Prevents Needless Repetition

Sometimes things are the way they are for very, very good reasons, and someone, somewhere, had to go through a lot of pain to understand what they were.

That’s not to say the rationale behind a given decision is above scrutiny. Documentation puts it front and center. If it’s convincing, great, people can press on with confidence. If it no longer holds up, then options can be reassessed, and courses can be altered quickly.

Documentation establishes a set of norms, prevents needless repetition, allows for faster problem-solving, and, ideally, inspires.

Two Worlds

In 1959, English author C. P. Snow delivered a seminal lecture called “The Two Cultures” (PDF). It is well worth reading in full, but the gist was that the sciences and the humanities weren’t working together and that they really ought to do so for humanity to flourish. To cordon ourselves off with specialisations deprives each group of swathes of knowledge.

“Polarisation is sheer loss to us all. To us as people and to our society. It is at the same time practical and intellectual and creative loss [...] It is false to imagine that those three considerations are clearly separable.”

— Charles Percy Snow

Although Snow himself conceded that “attempts to divide anything into two ought to be regarded with much suspicion,” the framing was and remains useful. Web development is its own meeting of worlds — between designers and engineers, art and data — and the places where they meet are where the good stuff really happens.

“The clashing point of two subjects, two disciplines, two cultures — two galaxies, so far as that goes — ought to produce creative chances.”

— Charles Percy Snow

Snow knew it, Leonardo da Vinci knew it, Steve Jobs knew it. Magic happens when we head straight for that collision.

A Common Language

Web development is a world of many different yet connected specialisations (and sub-specialisations for that matter). One of the key relationships is the one between engineers and designers. When the two are in harmony, the results can be breathtaking. When they’re not, everything and everyone involved suffers.

Digital design needs its own language: a hybrid of art, technology, interactivity, and responsiveness. Its documentation needs to reflect that, to be alive, something you can play with. It should start telling a story before anyone reads a word. Doing so makes everyone involved better: writers, developers, designers, and communicators.

Design documentation creates a bridge between worlds, a common language composed of elements of both. Design and engineering are increasingly intertwined; it’s only right that documentation reflects that.

Design Documentation

So here we are. The nitty-gritty of design documentation. We’re going to cover some key considerations as well as useful resources and tools at your disposal.

The difference between design documentation, technical documentation, and a design system isn’t always clear, and that’s fine. If things start to get a little blurry, just remember the goal is this: establish a visual identity, explain the principles behind it, and provide the resources needed to implement it as seamlessly as possible.

What should be covered isn’t the point of this piece so much as how it should be covered, but the usual suspects (typography, color, components, and so on) ought to get you started.

The job of design documentation is to weave all these things (and more) together. Here’s how.

Share The Why

When thinking of design systems and documentation, it’s understandable to jump to the whats — the fonts, the colors, the components — but it’s vital also to share the ethos that helped you to arrive at those assets at all.

Where did this all come from? What’s the vision? The guiding principles? The BBC does a good job of answering these questions for Global Experience Language (GEL), its shared design framework.

On top of being public-facing (more on that later), the guidelines and design patterns are accompanied by articles and playbooks explaining the guiding principles of the whole system.

Include proposal documents, if they exist, as well as work practices. Be clear about who the designs are built for. Just about every system has a target audience in mind, and that should be front and center.

Cutting the guiding principles is like leaving the Constitution out of a US history syllabus.

Make Its Creation A Collaborative Process

Design systems are big tents. They incorporate design, engineering, copywriting, accessibility, and even legal considerations — at their best anyway.

All of those worlds ought to have input in the documentation. The bigger the company/project, the more likely multiple teams should have input.

If the documentation isn’t created in a collaborative way, then what reason do you have to expect its implementation to be any different?

Use Dynamic Platforms

The days are long gone when brand guidelines printed in a book are sufficient. Much of modern life has moved online, and so too should design documentation. Happily (or dauntingly), there are plenty of platforms out there, many with excellent integrations with each other.

Potential resources/platforms include the likes of Figma and Storybook.

There can be a chain of platforms to facilitate the connections between worlds. Figma can lead into Storybook, and Storybook can be integrated directly into a project. Embrace design documentation as an ecosystem of skills.

Accommodate agile, constant development by integrating your design documentation with the code base itself.

Write With Use Cases In Mind

Although the abstract, philosophical aspects of design documentation are important, the system it describes is ultimately there to be used.

Consider your users’ goals. In the case of design, it’s to build things consistent with best practices. Show readers how to use the design guidelines. Make the output clear and practical. For example,

  • How to make a React component with design system fonts;
  • How to choose appropriate colors from our palette.

As we’ve covered, the design breaks down into clear, recognizable sections (typography, color, and so on). These sections can themselves be broken down into steps, the latter ones being clearly actionable:

  • What the feature is;
  • Knowledge needed for documentation to be most useful;
  • Use cases for the feature;
  • Implementation;
  • Suggested tooling.

The Mailchimp Pattern Library is a good example of this in practice. Use cases are woven right into the documentation, complete with contextual notes and example code snippets, making the implementation of best practices clear and easy.

Humanising Your Documentation, a talk by Carolyn Stransky, provides a smashing overview of making documentation work for its users.

Documentation should help people to achieve their goals rather than describe how things work.

As Stack Overflow co-founder Jeff Atwood once put it, “A well-designed system makes it easy to do the right things and annoying (but not impossible) to do the wrong things.”

“Use Case Driven Documentation” by Tyner Blain is a great breakdown of this ethos, as is “On Design Systems: Sell The Output, Not The Workflow” by our own Vitaly Friedman.

Language

The way things are said is important. Documentation ought to be clear, accessible, and accepting.

As with just about any documentation, give words like ‘just’, ‘merely’, and ‘simply’ a wide berth. What’s simple to one person is not always to another. Documentation should inform, not belittle. “Reducing bias in your writing” by Write the Docs gives excellent guidance here.

Another thing to keep in mind is the language you use. Instead of using “he” or “she,” use “one,” “they,” “the developer,” or some such. It may not seem like a big deal to one (see what I did there), but language like that helps reinforce that your resources are for everyone.

More generally, keep the copy clear and to the point. That’s easier said than done, but there are plenty of tools out there that can help tidy up your writing:

  • Alex, a tool for catching insensitive, inconsiderate writing;
  • Write Good, an English prose linter.
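
Both are available as npm packages, so folding them into a documentation workflow can be as simple as the following sketch, assuming Node.js is installed and your prose lives in a docs folder:

npm install --global alex write-good

# Flag insensitive, inconsiderate phrasing
alex docs/

# Lint the prose itself
write-good docs/*.md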

In a previous Smashing article, “Readability Algorithms Should Be Tools, Not Targets,” I’ve shared a wariness about tools like Grammarly or Hemingway Editor dictating how one writes, but they’re useful tools.

Also, I can never resist a good excuse to share George Orwell’s rules for language:

  1. Never use a metaphor, simile, or other figure of speech that you are used to seeing in print.
  2. Never use a long word where a short one will do.
  3. If it is possible to cut a word out, always cut it out.
  4. Never use the passive where you can use the active.
  5. Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.
  6. Break any of these rules sooner than say anything outright barbarous.

Books like The Elements of Style (PDF) by William Strunk Jr are good to be familiar with, too. Keep things informative but snappy.

Make It Beautiful

Design documentation has a lot more credibility if it’s walking the walk. If it looks like a hot mess, what are the chances of it being taken seriously?

Ideally, you should be showcasing a design ethos, not just explaining it. NASA showed way back in 1976 (PDF) that manuals can themselves be beautiful. The Graphics Standards Manual by Richard Danne and Bruce Blackburn feels like a creative work in its own right.

Show the same care and attention to detail in your design documentation that you expect users to show in applying it. Documentation should be the first and best example of it in action.

Make your documentation easy to navigate and search. The most wonderful resources in the world aren’t doing anyone much good if they can’t be found. It’s also a splendid opportunity to show information architecture best practice in action too.

Publish It

Once you’ve gone through the trouble of creating a design system and explaining how it works, why keep that to yourself? Publishing documentation and making it freely available for anyone to browse is a fantastic final polish.

Here at the Guardian, for example, our Source design system Storybook can be viewed by anyone, and its code is publicly available on GitHub. As well as being a proving ground for the system itself, it creates a space for knowledge sharing.

There are plenty of fantastic examples of publicly available design documentation over at the Design Systems Gallery, which is a great place to browse for inspiration and guidance.

What’s more, if there are stories from the formation of your system, writing articles or blog posts is also a totally legit way of documenting it. What did the New York Times do when they developed a design system? They wrote an article about it, of course.

Publishing design documentation — in all its forms — is a commitment, but it’s also a statement of purpose. Why not share something beautiful, right?

And Maintain It

This is all well and good, I hear you say, arms crossed and brow furrowed, but who’s going to keep all this stuff up to date? That’s all the time that could be spent making things.

I hear you. There are reasons that Tweets (Xs?) bemoaning the upkeep of documentation make the rounds from time to time.

Yes, it requires hard work and vigilance. The time, effort, and heartache you’ll save by having design documentation will be well worth the investment of those same things.

The better integrated the documentation is with the projects it guides, the more maintenance will take care of itself. As components and best practices change, as common issues arise and are ironed out, the system and its documentation can evolve in kind.

To spare you the suspense, your design documentation isn’t going to be perfect off the bat. There will be mistakes and situations that aren’t accounted for, and that’s fine. Own them. Acknowledge blindspots. Include ways for users to give feedback.

As with most things digital, you’re never really “done.”

Start Small

Such thorough, polished design documentation can almost be a deterrent, something only those with deep pockets can make. It may also seem like an unjustifiable investment of time. Neither has to be true.

Documentation of all forms saves time in the long run, and it makes your decisions better. Whether it’s a bash script or a newsletter signup component, you scrutinize it that little bit more when you commit to it as a standard rather than a one-off choice. Let a readme-driven ethos into your heart.

Start small. Choose fonts and colors and show them sitting together nicely on your repo wiki. That’s it! You’re underway. You will grow to care for your design documentation as you care for the project itself because they are part of each other.

Go forth and document!

]]>
hello@smashingmagazine.com (Frederick O’Brien)
<![CDATA[Creating Accessible UI Animations]]> https://smashingmagazine.com/2023/11/creating-accessible-ui-animations/ https://smashingmagazine.com/2023/11/creating-accessible-ui-animations/ Fri, 10 Nov 2023 10:00:00 GMT Ever since I started practicing user interface design, I’ve always believed that animations are an added bonus for enhancing user experiences. After all, who hasn’t been captivated by interfaces created for state-of-the-art devices with their impressive effects, flips, parallax, glitter, and the like? It truly creates an enjoyable and immersive experience, don’t you think?

Mercado Libre is the leading e-commerce and fintech platform in Latin America, and we leverage animations to guide users through our products and provide real-time feedback. Plus, the animations add a touch of fun by creating an engaging interface that invites users to interact with our products.

Well-applied and controlled animations are capable of reducing cognitive load and delivering information progressively — even for complex flows that can sometimes become tedious — thereby improving the overall user experience. Yet, when we talk about caring for creating value for our users, are we truly considering all of them?

After delving deeper into the topic of animations and seeking guidance from our Digital Accessibility team, my team and I have come to realize that animations may not always be a pleasant experience for everyone. For many, animations can generate uncomfortable experiences, especially when used excessively. For certain other individuals, including those with attention disorders, animations can pose an additional challenge by hindering their ability to focus on the content. Furthermore, for those afflicted by more severe conditions, such as those related to balance, any form of motion can trigger physical discomfort manifested as nausea, dizziness, and headaches.

These reactions, known as vestibular disorders, are a result of damage, injury, or illnesses in the inner ear, which is responsible for processing all sensory information related to balance control and eye movements.

In more extreme cases, individuals with photosensitive epilepsy may experience seizures in response to certain types of visual stimuli. If you’d like to learn more about motion sensitivity, there is plenty of good reading out there to start with.

How is it possible to strike a balance between motion sensitivities and our goal of using animation to enhance the user interface? That is what our team wanted to figure out, and I thought I’d share how we approached the challenge. So, in this article, we will explore how my team tackles UI animations that are inclusive and considerate of all users.

We Started With Research And Analysis

When we realized that some of our animations might cause annoyance or discomfort to users, we were faced with our first challenge: Should we keep the animations or remove them altogether? If we remove them, how will we provide feedback to our users? And how will not having animations impact how users understand our products?

We tackled this in several steps:

  1. We organized collaborative sessions with our Digital Accessibility team to gain insights.
  2. We conducted in-depth research on the topic to learn from the experiences and lessons of other teams that have faced similar challenges.

Note: If you’re unfamiliar with the Mercado Libre Accessibility Team’s work, I encourage you to read about some of the things they do over at the Mercado Libre blog.

We walked away with two specific lessons to keep in mind as we considered more accessible UI animations.

Lesson 1: Animation ≠ Motion

During our research, we discovered an important distinction: animation is not the same as motion. While all moving elements are animations, not every animated element involves motion in the sense of a change in position.

The Web Content Accessibility Guidelines (WCAG) include three criteria related to motion in interfaces:

  1. Pause, stop, and hide
    According to Success Criterion 2.2.2 (Level A), we ought to give users a way to pause, stop, or hide any content that moves, flashes, or scrolls if it starts automatically, lasts longer than five seconds, and is presented in parallel with other content; the same goes for content that updates automatically.
  2. Moving or flashing elements
    Guideline 2.3 covers avoiding seizures and negative physical reactions, including Success Criteria 2.3.1 (Level A) and 2.3.2 (Level AAA) for avoiding intermittent animations that flash more than three times per second, as they could trigger seizures.
  3. Animation from interactions
    Success Criterion 2.3.3 (Level AAA) specifies that users should be able to interact with the UI without relying solely on animations. In other words, the user should be able to stop any type of movement unless the animation is essential for functionality or conveying information.

These are principles that we knew we could lean on while figuring out the best approach for using animations in our work.

Lesson 2: Rely On Reduced Motion Preferences

Our Digital Accessibility team made sure we are aware of the prefers-reduced-motion media query and how it can be used to prevent or limit motion. macOS, for example, provides a “Reduce motion” setting in its System Settings.

As long as that setting is enabled and the browser supports it, we can use prefers-reduced-motion to configure animations in a way that respects that preference.

:root {
  --animation-duration: 250ms; 
}

@media screen and (prefers-reduced-motion: reduce), (update: slow) {
  /* Zero out the duration to remove animation when a user requests a reduced animation experience */
  .animated {
    --animation-duration: 0ms !important; 
  }
}

Eric Bailey is quick to remind us that reduced motion is not the same as no motion. There are cases where removing animation will prevent the user’s understanding of the content it supports. In these cases, it may be more effective to slow things down rather than remove them completely.

:root {
  --animation-duration: 250ms; 
}

@media screen and (prefers-reduced-motion: reduce), (update: slow) {
  /* Increase duration to slow animation when reduced animation is preferred */
  * {
    --animation-duration: 6000ms !important; 
  }
}
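
Either way, the idea is that components consume the custom property instead of hard-coding a duration, so a single override adapts everything at once. Here is a minimal sketch; the fade-in keyframes and .animated class are just for illustration:

@keyframes fade-in {
  from { opacity: 0; }
  to { opacity: 1; }
}

.animated {
  /* Resolves to 250ms by default, or to whichever reduced-motion override applies */
  animation: fade-in var(--animation-duration) ease-out;
}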

Armed with a better understanding that animation doesn’t always mean changing positions and that we have a way to respect a user’s motion preferences, we felt empowered to move to the next phase of our work.

We Defined An Action Plan

When faced with the challenge of integrating reduced motion preferences without significantly impacting our product development and UX teams, we posed a crucial question to ourselves: How can we effectively achieve this without compromising the quality of our products?

We are well aware that implementing broad changes to a design system is not an easy task, as it subsequently affects all Mercado Libre products. It requires strategic and careful planning. That said, we also embrace a mindset of beta and continuous improvement. After all, how can you improve a product daily without facing new challenges and seeking innovative solutions?

With this perspective in mind, we devised an action plan with clear criteria and actionable steps. Our goal is to seamlessly integrate reduced motion preferences into our products and contribute to the well-being of all our users.

Taking into account the criteria established by the WCAG and the distinction between animation and motion, we classified animations into three distinct groups:

  1. Animations that do not apply to the criteria;
  2. Non-essential animations that can be removed;
  3. Essential animations that can be adapted.

Let me walk you through those in more detail.

1. Animations That Do Not Apply To The Criteria

We identified animations that do not involve any type of motion and, therefore, do not require any adjustments, as they pose no triggers for users with vestibular disorders or reduced motion preferences.

Animations in this first group include:

  • Objects that instantly appear and disappear without transitions;
  • Elements that transition color or opacity, such as changes in state.

A button that changes color on hover is an example of an animation included in this group.

Button changing color on mouse hover.

As long as we are not applying some sort of radical change on a hover effect like this — and the colors provide enough contrast for the button label to be legible — we can safely assume that it is not subject to accessibility guidelines.
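
In code, a state change like that can be as simple as the following sketch (the class name and colors are illustrative):

.button {
  background-color: #3483fa;
  /* Only the color transitions; nothing changes position */
  transition: background-color 150ms ease-in-out;
}

.button:hover {
  background-color: #2968c8;
}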

2. Non-Essential Animations That Can Be Removed

Next, we categorized animations with motion that is non-essential for the interface and contrasted them with those that add context or help the user navigate. We consider non-essential animations to be those that are not crucial for understanding the content or state of the interface and that could cause discomfort or distress to some individuals.

This is how we defined animations that are included in this second group:

  • Animated objects that take up more than one-third of the screen or move across a significant distance;
  • Elements with autoplay or automatic updates;
  • Parallax effects, multidirectional movements, or movements along the Z-axis, such as changes in perspective;
  • Content with flashes or looping animations;
  • Elements with vortex, scaling, zooming, or blurring effects;
  • Animated illustrations, such as morphing SVG shapes.

These are the animations we decided to completely remove when a user has enabled reduced motion preferences since they do not affect the delivery of the content, opting instead for a more accessible and comfortable experience.

Some of this is subjective and takes judgment. A few articles and resources helped us define the scope for this group of animations.

For objects that take up more than one-third of the screen or move position across a significant distance, we opted for instant transitions over smooth ones to minimize unnecessary movements. This way, we ensure that crucial information is conveyed to users without causing any discomfort yet still provide an engaging experience in either case.

Comparing a feedback screen with animations that take up more than one-third of the screen versus the same screen with instant animations.

Other examples of animations we completely remove include elements that autoplay, auto-update, or loop infinitely. This might be a video or, more likely, a carousel that transitions between panels. Whatever the case, the purpose of removing movement from animations that are “on” by default is that it helps us conform to WCAG Success Criterion 2.2.2 (Level A) because we give the user absolute control to decide when a transition occurs, such as navigating between carousel panels.

Additionally, we decided to eliminate the horizontal sliding effect from each transition, opting instead for instantaneous changes that do not contribute to additional movement, further preventing the possibility of triggering vestibular disorders.

Comparing an auto-playing carousel with another carousel that incorporates instant changes instead of smooth transitions.

Along these same lines, we decided that parallax effects and any multidirectional movements that involve scaling, zooming, blurring, and vortex effects are also included in this second group of animations that ought to be replaced with instant transitions.

Comparing a card flip animation with smooth transitions with one that transitions instantly.

The last type of animation that falls in this category is animated illustrations. Rather than allowing them to change shape as they normally would, we merely display a static version. This way, the image still provides context for users to understand the content without the need for additional movement.

Comparing an animated illustration with the same illustration without motion.

3. Essential Animations That Can Be Adapted

The third and final category of animations includes ones that are absolutely essential to use and understand the user interface. This could potentially be the trickiest of them all because there’s a delicate balance to strike between essential animation and maintaining an accessible experience.

That is why we opted to provide alternative animations when the user prefers reduced motion. In many of these cases, it’s merely a matter of adjusting or reducing the animation so that users are still able to understand what is happening on the screen at all times, but without the intensity of the default configuration.

The best way we’ve found to do this is by adjusting the animation in a way that makes it more subtle. For example, adjusting the animation’s duration so that it plays longer and slower is one way to meet the challenge.

The loading indicator in our design system is a perfect case study. Is this animation absolutely necessary? It is, without a doubt, as it gives the user feedback on the interface’s activity. If it were to stop without the interface rendering updates, then the user might interpret it as a page error.

Rather than completely removing the animation, we picked it apart to identify what aspects could pose issues:

  • It could rotate considerably fast.
  • It constantly changes scale.
  • It runs in an infinite loop until it vanishes.
The loading indicator.

Considering the animation’s importance in this context, we proposed an adaptation of it that meets these requirements:

  • Reduce the rotation speed.
  • Eliminate the scaling effect.
  • Set the maximum duration to five seconds.
Comparing the loading indicator with and without reduced motion preferences enabled.
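
Here is a rough sketch of how an adaptation like that could be expressed in CSS; the keyframes, class name, and exact timings are hypothetical stand-ins rather than our production code:

@keyframes spin {
  to { rotate: 1turn; }
}

@keyframes pulse {
  to { scale: 1.15; }
}

.loading-indicator {
  animation:
    spin .8s linear infinite,
    pulse .8s ease-in-out infinite alternate;
}

@media (prefers-reduced-motion: reduce) {
  .loading-indicator {
    /* Slower rotation, no scaling, and capped at five seconds total */
    animation: spin 2.5s linear 2;
  }
}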

The bottom line:

Animation can be necessary and still mindful of reduced motion preferences at the same time.

This is the third and final category we defined to help us guide our decisions when incorporating animation in the user interface, and with this guidance, we were able to tackle the third and final phase of our work.

We Expanded It Across All Our Products

After gaining a clear understanding of the necessary steps in our execution strategy, we decided to begin integrating the reduced motion preferences we defined in our design system across all our product interfaces. Anyone who manages or maintains a design system knows the challenges that come with it, particularly when it comes to implementing changes organically without placing additional burden on our product teams.

Our approach was rooted in education.

Initially, we focused on documenting the design system, creating a centralized and easily accessible resource that offered comprehensive information on accessibility for animations. Our focus was on educating and fostering empathy among all our teams regarding the significance of reduced motion preferences. We delved into the criteria related to motion, how to achieve it, and, most importantly, explaining how our users benefit from it.

We also addressed technical aspects, such as when the design system automatically adapts to these preferences and when the onus shifts to the product teams to tailor their experiences while proposing and implementing animations in their projects. Subsequently, we initiated a training and awareness campaign, commencing with a series of company-wide presentations and the creation of accessibility articles like the one you’re reading now!

Conclusion

Our design system is the ideal platform to apply global features and promote a culture of teamwork and consistency in experiences, especially when it comes to accessibility. Don’t you agree?

We are now actively working to ensure that whenever our products detect the default motion settings on our users’ devices, they automatically adapt to their needs, thus providing enhanced value in their experiences.

How about you? Are you adding value to the user experience of your interfaces with accessible animation? If so, what principles or best practices are you using to guide your decisions, and how is it working for you? Please share in the comments so we can compare notes.

]]>
hello@smashingmagazine.com (Oriana García)
<![CDATA[WordPress Playground: From 5-Minute Install To Instant Spin-Up]]> https://smashingmagazine.com/2023/11/wordpress-playground-5-minute-install-instant-spin-up/ https://smashingmagazine.com/2023/11/wordpress-playground-5-minute-install-instant-spin-up/ Thu, 09 Nov 2023 12:00:00 GMT Many things have changed in WordPress over the years, but installation has largely remained the same: download WordPress, drop it on a server, create a database, sprinkle in some configuration, and presto, we have a WordPress site. This process was once lovingly referred to as the “famous five-minute install,” although that moniker seems to have faded with time, particularly as many hosting providers offer a more streamlined experience.

But what if WordPress didn’t require any setup at all? As in, you tap a link, and WordPress spins up a site for you right there, on demand? That’s probably difficult to imagine, considering WordPress runs on top of PHP, MySQL databases, and Apache. It’s not the most portable system.

That’s the aim of WordPress Playground, which got its first public boost when Matt Mullenweg introduced it during State of Word 2022.

Playground isn’t the first attempt at instant WordPress sites, though. TasteWP, for example, spins up a disposable test site the moment you follow a link, no login required. Notice how the URL of a freshly created site is a subdomain of a TasteWP-related domain: hangingpurpose.s1-tastewp.com. TasteWP generates an instance on its multi-site network and establishes a URL for it based on a randomized naming system.

There’s a giant countdown timer on the screen that indicates when the site is scheduled to expire. That makes sense, right? Allowing anyone and everyone to create a site on the spot without so much as a login could become taxing on the server, so allowing sites to self-destruct on a schedule is likely as much to do with self-preservation as it does economics.

Speaking of economics, the countdown timer is immediately followed by a call to action to upgrade, which buys you permanence, extra server space, and customer support.

Without upgrading, though, you are only allowed two free instant sites. But if you create an account and log into TasteWP, then you can create up to six test sites on a free pricing tier.

That’s a look at the “quick” onboarding, but TasteWP does indeed have a more robust way to spin up a WordPress testing site with a set of advanced configurations, including which WordPress version to use with which version of PHP, settings you might normally define in wp-config.php, and options for adding specific themes and plugins.

So, how does that compare to WordPress Playground? Perhaps the greatest difference is that a TasteWP site is connected to the internet. It’s not a WordPress simulation, but an actual instance with a URL you can link up and share with others… as long as the site hasn’t expired. That could very well be enough of a differentiation to warrant more players in this space, even with WordPress Playground hanging around.

I wanted to give you a sense of what’s already offered before actually unboxing WordPress Playground. Now that we know what else is out there let’s turn our attention back to Playground and explore it.

Starting Up WordPress Playground

One of the first interesting things about WordPress Playground is that it is available in not just one but several places. I wouldn’t liken it completely to a service like TasteWP, where you create an account to create and manage WordPress instances. It’s more like a developer tool, one that you can reach for when testing your work in a WordPress environment.

You can simply hit the playground.wordpress.net URL in your browser to launch a new site on the spot. Or, you can launch an instance from the command line. Perhaps you prefer to use the official Chrome extension instead. Whatever the case, let’s look at those options.

1. Using The WordPress Playground URL

This is the most straightforward way to get a WordPress Playground instance up and running. That’s because all you do is visit the playground.wordpress.net address in the browser, and a WordPress site is created immediately.

This is exactly how the WordPress Playground demo works, prompting you to click a button to open a new WordPress site. In fact, try clicking the following button to create one now.

Create A WordPress Site

If you want to use a specific version of WordPress and PHP in your Playground, all it takes is adding a couple of parameters to the URL. For example, we can instruct Playground to run WordPress 6.2 on PHP 8.2 with the following URL:

https://playground.wordpress.net/?php=8.2&wp=6.2

You can even try out the developmental versions of WordPress using Playground by using the following parameter:

https://playground.wordpress.net/?wp=beta

2. Using The GitHub Repository

True to the WordPress ethos, WordPress Playground is very much an open-source project. The repo is available over at GitHub, and we can pull it into a local environment and use WordPress Playground right from a terminal.

First, let’s clone the repository from the command line:

git clone https://github.com/WordPress/wordpress-playground.git

There is a slightly faster alternative that fetches just the latest revision:

git clone -b trunk --single-branch --depth 1 git@github.com:WordPress/wordpress-playground.git

Now that we have the WordPress Playground package in our local environment, we can formally install it:

cd wordpress-playground
npm install
npm run dev

Once the local server is running, we should get a URL from the terminal that we can use to access the new Playground instance, likely pointed to http://localhost:5400/website-server/.

We are also able to set which versions of WordPress and PHP to use in the virtual environment by adding a couple of instructions to the command. For example, this command triggers a new WordPress 5.9 instance running on PHP 7.4:

wp-now start --wp=5.9 --php=7.4

3. Using wp-now In The Command Line

An even quicker way to get Playground running from the command line is to globally install the wp-now CLI tool:

npm install -g @wp-now/wp-now

This way, we can create a new Playground instance anytime we want with a single command:

wp-now start

Be sure that you’re using Node 18 or higher. Otherwise, you’re likely to bump into some errors. Once the command executes, however, the browser will automatically open a new tab pointing to the new instance. You’re already signed into WordPress and everything!

We can configure the environment just as we could with the npm package:

wp-now start --wp=5.9 --php=7.4

A neat thing about this method is that there are several different “modes” you can run this in, and which one you use depends on the directory you’re in when running the command. For example, if you run the command from a directory that already contains WordPress, then Playground will automatically recognize that and run the directory as a full WordPress installation. Or, it’s possible to execute the command from a directory that contains nothing but an index.php file, and Playground will start the server and run requests through that file.
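
For example, here is a quick way to see the index.php mode in action; the directory name and message are arbitrary:

mkdir playground-test && cd playground-test
echo '<?php echo "Hello from Playground!";' > index.php
wp-now start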

There are other options, including modes for theme, plugin, wp-content, and wordpress-develop, that are worth checking out in the documentation.

4. Using The Visual Studio Code Extension

WordPress Playground is also available as a Visual Studio Code extension. It provides a nice one-click process to launch a local WordPress site.

Installing the extension adds a WordPress icon to the sidebar menu that, when clicked, opens a panel for launching a new WordPress Playground site.

Open a project folder, click the “Start WordPress Server,” and the Playground extension boots up a new site on the spot. The extension also provides server details, including the local URL, the mode it’s in, and settings to change which versions of WordPress and PHP are in use.

One thing I noticed while poking at the instance is that it automatically installs and activates the SQLite Database Integration plugin. Obviously, that’s a required component for things to work, but I thought it was worth pointing out that the installation does indeed include at least one pre-installed plugin right out of the gate.

5. Using A Chrome Extension To Preview Themes & Plugins

Have you ever found yourself perusing the WordPress Theme Directory and wanting to take a particular theme out for a test drive? There’s already a “Preview” button baked right into the directory to do exactly that.

That’s nice, as it opens up the theme in a frame that looks a lot like the classic WordPress Customizer.

But how cool would it be to really open up the theme and see what it is like to do actual tasks with it in the WordPress admin, such as creating a post, editing a page, or exploring its block patterns?

That is what the “Open in WordPress Playground” extension for Chrome can do. It literally adds a button to “Preview” a theme in a fresh WordPress Playground instance that, when clicked, allows you to interact with the theme in a real WordPress environment.

I tried out the extension, and it worked as described, and not only that, but it works with the WordPress Plugin Directory as well. In other words, it’s now possible to try a new plugin on the spot without having to install, activate, and test it yourself in some sandbox or, worse, your live or staging WordPress environments.

This is a potential game-changer as far as lowering the barrier to entry for using WordPress and for theme and plugin developers offering a convenient way to provide users with a demo experience. I can easily imagine a future where paid commercial plugins adopt a similar user experience to help reduce refunds from customers merely wanting to try a plugin before formally committing to it.

The extension is available free of charge in the Chrome Web Store, but you can check out the source code in its GitHub repository as well. While we’re on it, it’s worth noting that this is a third-party extension rather than an official WordPress or Automattic release.

The Default Playground Site

No matter which Playground method you use, the instances that spin up are nearly identical. For example, all of the methods we covered have the WordPress Twenty Twenty-Three theme installed and activated by default. That makes a lot of sense: a standard WordPress installation does the same.

Similarly, all of the instances we covered make use of the SQLite Database Integration plugin developed by the WordPress Performance Team. This also makes sense: we need the plugin to establish a database. It also sounds like from the plugin description that the intent is to eventually integrate the plugin into WordPress Core, so perhaps we’ll eventually see zero plugins in a default Playground instance at some point.

There are a few differences between instances. They’re not massive, but worth calling out so you know what you are activating or have available when using a particular method to create a WordPress instance. The following table breaks down the current components included in each method at the time of this writing:

  • WordPress Playground website: WordPress 6.3.2, PHP 8.0
    Themes: Twenty Twenty-Three (active)
    Plugins: SQLite Database Integration (active)
  • GitHub repo: WordPress 6.3.2, PHP 8.0
    Themes: Twenty Twenty-Three (active)
    Plugins: SQLite Database Integration (active)
  • wp-now package: WordPress 6.3.2, PHP 8.0.10-dev
    Themes: Twenty Twenty-Three (active), Twenty Twenty-Two, Twenty Twenty-One
    Plugins: Akismet, Hello Dolly, SQLite Database Integration (active)
  • VS Code extension: WordPress 6.3.2, PHP 7.4
    Themes: Twenty Twenty-Three (active), Twenty Twenty-Two, Twenty Twenty-One
    Plugins: Akismet, Hello Dolly, SQLite Database Integration (active)
  • Chrome extension: WordPress 6.3.2, PHP 8.0
    Themes: Twenty Twenty-Three (active)
    Plugins: SQLite Database Integration (active)

And, of course, any other differences would come from how you configure an instance. For example, if you run the wp-now package on the command line when you’re in a directory with WordPress and several themes and plugins installed, then those themes and plugins will be available to activate and use. Similarly, using the Chrome Extension on any WordPress Theme Directory page or Plugin Directory page will install that particular theme or plugin.

Installing Themes, Plugins, and Block Patterns

In a standard WordPress installation, you might log into the WordPress admin, navigate to Appearance → Themes, and install a new theme straight from the WordPress Theme Directory. That’s because your site has a web connection and is able to pull things in from WordPress.org. Since a WordPress Playground instance from the WordPress Playground website (which is essentially the same as the Chrome extension) is not technically connected to the internet, there is no way to install plugins and themes to it.

If you want the same sort of point-and-click experience in your Playground site that you would get in a standard WordPress installation, then go with the GitHub repo, the wp-now package, or the VS Code extension. Each of these is indeed connected to the internet and is able to install themes and plugins directly from the WordPress admin.

You may notice a note about using the Query API to install a theme or plugin to a WordPress Playground instance that is disconnected from the web:

“Playground does not yet support connecting to the themes directory yet. You can still upload a theme or install it using the Query API (e.g. ?theme=pendant).”

That’s right! We’re still able to load in whatever theme we want by passing the theme’s slug into the Playground URL used to generate the site. For example,

https://playground.wordpress.net/?theme=ollie

The same goes for plugins:

https://playground.wordpress.net/?plugin=jetpack

And if we want to bundle multiple plugins, we can pass in each plugin as a separate parameter chain with an ampersand (&) in the URL:

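Based on that pattern, the URL would presumably look something like this (jetpack and gutenberg are just example slugs here):

https://playground.wordpress.net/?plugin=jetpack&plugin=gutenberg
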
It does not appear that we can do the same thing with themes. If you’re testing several themes in a single instance, then it’s probably best to use the wp-now package or the VS Code extension when pointing at a directory that already includes those themes.

What about block patterns, you ask? We only get two pre-defined patterns in a default WordPress Playground instance created on Playground’s site: Posts and Call to Action.

That’s because block patterns, too, are served to the WordPress admin from an internet connection. We get a much wider selection of options when creating an instance using any of the methods that establish a local host connection.

There appears to be no way, unfortunately, to import patterns with the Query API like we can for themes and plugins. The best way to bring in a new pattern, it seems, is to either bundle it in the theme you are using (or pointing to) or manually navigate to the Block Pattern Directory and use the “Copy” option to paste a pattern into the page or post you are testing in Playground.

Importing & Exporting Playgrounds

The transience of a WordPress Playground instance is its appeal. The site practically evaporates into thin air with the trigger of a page refresh. But what if you actually want to preserve an instance? Perhaps you need to come back to your work later. Or maybe you’re working on a visual tweak and want to demo it for your team. Playground instances can indeed be exported and even imported into other instances.

Open up a new WordPress site over at playground.wordpress.net and locate the Upload and Download icons at the top-right corner of the frame.

No worries, this is not a step-by-step tutorial on how to click buttons. The only thing you really need to know is that these buttons are only available in instances created at the WordPress Playground site or when using the Chrome Extension to preview themes and plugins at WordPress.org.

What’s more interesting is what we get when exporting an instance. We get a ZIP file — wordpress-playground.zip to be exact — as you might expect. Extract that, and what we have is the entire website, including the full WordPress installation. It resembles any other standard WordPress project with a wp-content directory that contains the source files for the installed themes and plugins, as well as media library uploads.

The only difference I could spot between this WordPress Playground package and a standard project is that Playground provides the SQLite database in the export, also conveniently located in the wp-content directory.

This is a complete WordPress project. Now that we have it and have confirmed it has everything we would expect a WordPress site to have, we can use Playground’s importing feature to replicate the exported site in a brand-new WordPress Playground instance. Click the Upload icon in the frame of the new instance, then follow the prompts to upload the ZIP file we downloaded from the original instance.

You can probably guess what comes next. If we can export a complete WordPress site with Playground, we can not only import that site into a new Playground instance but import it to a hosting provider as well.

In other words, it’s possible to use Playground as a testing ground for development and then ship it to a production or staging environment when ready. Similarly, the exported files can be committed to a GitHub repo where your production files are, and that triggers a fresh build in production. However you choose to roll!

Sharing Playgrounds

There are clear benefits to being able to import and export Playground sites. WordPress has never been a particularly portable system, as you know if you’ve ever migrated WordPress sites and data. But when WordPress is able to move around as freely as it does with Playground, it opens up new possibilities for how we share work.

Sharing With The Query API

We’ve been using the Query API in many examples. It’s extremely convenient in that you append parameters on the WordPress Playground site, hit the URL, and a site spins up with everything specified.

The WordPress Playground site is hosted, so sharing a specific configuration of a Playground site only requires you to share a URL with the site’s configurations appended as parameters. For example, this link shares the Blue Note theme configured with the Gutenberg plugin:

We can do a little more than that, like link directly to the post editor:

Even better, let’s link someone to the theme’s templates in the Site Editor:
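
The exact URLs behind those links aren’t reproduced here, but with the Query API parameters we’ve covered (plus the url parameter for setting the landing page, which is an assumption on my part), they would look along these lines, in the same order:

https://playground.wordpress.net/?theme=blue-note&plugin=gutenberg
https://playground.wordpress.net/?theme=blue-note&plugin=gutenberg&url=/wp-admin/post-new.php
https://playground.wordpress.net/?theme=blue-note&plugin=gutenberg&url=/wp-admin/site-editor.php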

Again, there are plenty more parameters than what we have explored in this article that are worth checking out in the WordPress Playground documentation.

Sharing With An Embedded iFrame

We already know this is possible because the best example of it is the WordPress Playground developer page. There’s a Playground instance running and embedded directly on the page. Even when you spin up a new Playground instance, you’re effectively running an iframe within an iframe.

Let’s say we want to embed a WordPress site configured with the Pendant theme and the Gutenberg plugin:

<iframe width="800" height="650" src="https://playground.wordpress.net/?plugin=gutenberg&theme=pendant&mode=seamless" allowfullscreen></iframe>

So, really, what we’re doing is using the source URL in a different context. We can share the URL with someone, and they get to access the configured site in a browser. In this case, however, we are dropping the URL into an iframe element in HTML, and the Playground instance renders on the page.

Not to get too meta, but it’s pretty neat that we can log into a WordPress production site, create a new page, and embed a Playground instance on the page with the Custom HTML Block:

What I like about sharing Playground sites this way is that the instance is effectively preserved and always accessible. Sure, the data will not persist on a page refresh, but create the URL once, and you always have a copy of it previewed on another page that you host.

Speaking of which, WordPress Playground can be self-hosted. You have to imagine that the current Playground API hosted at playground.wordpress.net will get overburdened with time, assuming that Playground catches on with the community. If their server is overworked, I expect that the hosted API will either go away (breaking existing instances) or at least be locked for creating new instances.

That’s why self-hosting WordPress Playground might be a good idea in the long run. I can see WordPress developers and agencies reaching for this to provide customers and clients with demo work. There’s so much potential and nuance to self-hosting Playground that it might even be worth its own article.

The documentation provides a list of parameters that can be used in the Playground URL.

Sharing With JSON Blueprints

This “modern” era of WordPress is all about block-based layouts that lean more heavily into JavaScript, where PHP has typically been the top boss. And with this transition, we gained the ability to create entire WordPress themes without ever opening a template file, thanks to the introduction of theme.json.

Playground can also be configured with structured data. In fact, you can see the Playground website’s JSON configurations via this link. It’s pretty incredible that we can both configure a Playground site without writing code and share the file with others to sync environments.

Here is an example pulled directly from the Playground docs:

{
  "$schema": "https://playground.wordpress.net/blueprint-schema.json",
  "landingPage": "/wp-admin/",
  "preferredVersions": {
    "php": "8.0",
    "wp": "latest"
  },
  "steps": [
    {
      "step": "login",
      "username": "admin",
      "password": "password"
    }
  ]
}

We totally can send this file to someone to clone a site we’re working on. Or, we can use the file in a self-hosted context, and others can pull it into their own blueprint.
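
One sketch of that self-hosted angle, assuming the blueprint JSON is uploaded somewhere publicly accessible and that the blueprint-url query parameter works as the docs describe, is to hand Playground the file’s address directly:

https://playground.wordpress.net/?blueprint-url=https://example.com/my-blueprint.json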

Interestingly, we can even ditch the blueprint file altogether and write the structured data as URL fragments instead:

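For instance, the blueprint above could be collapsed into the URL’s hash, something along these lines:

https://playground.wordpress.net/#{"landingPage":"/wp-admin/"}
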
That might get unwieldy really fast, but it is nice that the WordPress Playground team is thinking about all of the possible ways we might want to port WordPress.

Advanced Playground Configurations

Up to now, we’ve looked at a variety of ways to configure WordPress Playground using APIs that are provided by or based on playground.wordpress.net. It’s fast, convenient, and pretty darn flexible for something so new and experimental.

But let’s say you need full control to configure a Playground instance. I mean everything, from which themes and plugins are preinstalled to prepublished pages and posts, defining php.ini memory limits, you name it. The JavaScript API is what you’ll need because it is capable of executing PHP code, making requests, managing files and directories, and configuring parts of WordPress in ways that none of the other approaches offer.

The JavaScript API is integrated into an iframe and uses the @wp-playground/client npm package. The Playground docs provide the following example in the “Quick Start” guide.

<iframe id="wp" style="width: 100%; height: 300px; border: 1px solid #000;"></iframe>

<script type="module">
  // Use unpkg for convenience
  import { startPlaygroundWeb } from 'https://unpkg.com/@wp-playground/client/index.js';

  const client = await startPlaygroundWeb({
    iframe: document.getElementById('wp'),
    remoteUrl: 'https://playground.wordpress.net/remote.html',
  });
  // Let's wait until Playground is fully loaded
  await client.isReady();
</script>

This is an overly simplistic example that demonstrates how the JavaScript API is embedded in a page in an iframe. The Playground docs provide a better example of how PHP is used within JavaScript to do things, like execute a file pointed at a specific path:

php.writeFile(
  "/www/index.php",
  `<?php echo "Hello world!";`
);
const result = await php.run({
  scriptPath: "/www/index.php"
});
// result.text === "Hello world!"

Adam Zieliński and Thomas Nattestad offer a nicely commented example with multiple tasks in the article they published over at web.dev:

import {
  connectPlayground,
  login,
  installPlugin,
} from '@wp-playground/client';

const client = await connectPlayground(
  document.getElementById('wp'), // An iframe
  { loadRemote: 'https://playground.wordpress.net/remote.html' },
);
await client.isReady();

// Login the user as admin and go to the post editor:
await login(client, 'admin', 'password');
await client.goTo('/wp-admin/post-new.php');

// Run arbitrary PHP code:
await client.run({ code: '<?php echo "Hi!"; ?>' });

// Install a plugin (fetchZipFile() is a stand-in for however you fetch the plugin's ZIP file):
const plugin = await fetchZipFile();
await installPlugin(client, plugin);

Once again, the scope and breadth of using the JavaScript API for advanced configurations is yet another topic that might warrant its own article.

Wrapping Up

WordPress Playground is an excellent new platform that’s an ideal testing environment for WordPress themes, plugins… or even WordPress itself. Despite the fact that it is still in its early days, Playground is already capable of some pretty incredible stuff that makes WordPress more portable than ever.

We looked at lots of ways that Playground accomplishes this. Just want to check out a new theme? Use the playground.wordpress.net URL configured with parameters supported by the Query API, or grab the Chrome extension. Need to do a quick test of your theme in a different PHP environment? Use the wp-now package to spin up a test site locally. Want to let others demo a plugin you made? Embed Playground in an iframe on your site.

WordPress Playground is an evolving space, so keep your eye on it. You can participate in the discussion and request a feature through a pull request or report an issue that you encounter in your testing. In the meantime, you may want to be aware of what the WordPress Playground team has identified as known limitations of the service:

  • No access to plugins and theme directories in the browser.
    The theme and plugin directories are not accessible because Playground instances are not connected to the internet; they are virtual environments.
  • Instances are destroyed on a browser refresh.
    Because WordPress Playground uses a browser-based temporary database, all changes and uploads are lost after a browser refresh. If you want to preserve your changes, though, use the export feature to download a zipped archive of the instance. Meanwhile, this is something the team is working on.
  • iFrame issues with anchor links.
    Clicking a link in a Playground instance that is embedded on a page in an iframe may trigger the main page to refresh, causing the instance to reset.
  • iFrame rendering issues.
    There are reports where setting the iframe’s src attribute to a blobbed URL instead of an HTTP URL breaks links to assets, including CSS and images.

How will you use WordPress Playground? WordPress Playground creator Adam Zieliński recently shipped a service that uses Playground to preview pull requests in GitHub. We all know that WordPress has never put a strong emphasis on developer experience (DX) the same way other technical stacks do, like static site generators and headless configurations. But this is exactly the sort of way that I imagine Playground improving DX to make developing for WordPress easier and, yes, fun.

]]>
hello@smashingmagazine.com (Ganesh Dahal)
<![CDATA[Addressing Accessibility Concerns With Using Fluid Type]]> https://smashingmagazine.com/2023/11/addressing-accessibility-concerns-fluid-type/ https://smashingmagazine.com/2023/11/addressing-accessibility-concerns-fluid-type/ Tue, 07 Nov 2023 18:00:00 GMT You may already be familiar with the CSS clamp() function. You may even be using it to fluidly scale a font size based on the browser viewport. Adrian Bece demonstrated the concept in another Smashing Magazine article just last year. It’s a clever CSS “trick” that has been floating around for a while.

But if you’ve used the clamp()-based fluid type technique yourself, then you may have also run into articles that offer a warning about it. For example, Adrian mentions this in his article:

“It’s important to reiterate that using rem values doesn’t automagically make fluid typography accessible for all users; it only allows the font sizes to respond to user font preferences. Using the CSS clamp function in combination with the viewport units to achieve fluid sizing introduces another set of drawbacks that we need to consider.”

Here’s Una Kravets with a few words about it on web.dev:

“Limiting how large text can get with max() or clamp() can cause a WCAG failure under 1.4.4 Resize text (AA), because a user may be unable to scale the text to 200% of its original size. Be certain to test the results with zoom.”

Trys Mudford also has something to say about it in the Utopia blog:

“Adrian Roselli quite rightly warns that clamp can have a knock-on effect on the maximum font-size when the user explicitly sets a browser text zoom preference. As with any feature affecting typography, ensure you test thoroughly before using it in production.”

Mudford cites Adrian Roselli, who appears to be the core source of the other warnings:

“When you use vw units or limit how large text can get with clamp(), there is a chance a user may be unable to scale the text to 200% of its original size. If that happens, it is WCAG failure under 1.4.4 Resize text (AA) so be certain to test the results with zoom.”

So, what’s going on here? And how can we address any accessibility issues so we can keep fluidly scaling our text? That is exactly what I want to discuss in this article. Together, we will review what the WCAG guidelines say to understand the issue, then explore how we might be able to use clamp() in a way that adheres to WCAG Success Criterion (SC) 1.4.4.

WCAG Success Criterion 1.4.4

Let’s first review what WCAG Success Criterion 1.4.4 says about resizing text:

“Except for captions and images of text, text can be resized without assistive technology up to 200 percent without loss of content or functionality.”

Normally, if we’re setting CSS font-size to a non-fluid value, e.g., font-size: 2rem, we never have to worry about resizing behavior. All modern browsers can zoom up to 500% without additional assistive technology.

So, what’s the deal with sizing text with viewport units like this:

h1 {
  font-size: 5vw;
}

Here’s a simple example demonstrating the problem. I suggest viewing it in either Chrome or Firefox because zooming in Safari can behave differently.

The text only scales up to 55px, or 1.67 times its original size, even though we zoomed the entire page to five times its original size. And because WCAG SC 1.4.4 requires that text can scale to at least two times its original size, this simple example would fail an accessibility audit, at least in most browsers at certain viewport widths.

Surely this can’t be a problem for all clamped font sizes with vw units, right? What about one that only increases from 16px to 18px:

h1 {
  font-size: clamp(16px, 15.33px + 0.208vw, 18px);
}

The vw part of that inner calc() function (clamp() supports calc() without explicitly declaring it) is so small that it couldn’t possibly cause the same accessibility failure, right?

Sure enough, even though it doesn’t get to quite 500% of its original size when the page is zoomed to 500%, the size of the text certainly passes the 200% zoom specified in WCAG SC 1.4.4.

So, clamped viewport-based font sizes fail WCAG SC 1.4.4 in some cases but not in others. The only advice I’ve seen for determining which situations pass or fail is to check each of them manually, as Adrian Roselli originally suggested. But that’s time-consuming and imprecise because the functions don’t scale intuitively.

There must be some relationship between our inputs — i.e., the minimum font size, maximum font size, minimum breakpoint, and maximum breakpoint — that can help us determine when they pose accessibility issues.

Thinking Mathematically

If we think about this problem mathematically, we really want to ensure that z₅(v) ≥ 2z₁(v). Let’s break that down.

z₁(v) and z₅(v) are functions that take the viewport width, v, as their input and return a font size at a 100% zoom level and a 500% zoom level, respectively. In other words, what we want to know is at what range of viewport widths will z₅(v) be less than 2×z₁(v), which represents the minimum size outlined in WCAG SC 1.4.4?

Using the first clamp() example we looked at that failed WCAG SC 1.4.4, we know that the z₁ function is the clamp() expression:

z₁(v) = clamp(16, 5.33 + 0.0333v, 48)

Notice: The vw units are divided by 100 to translate from CSS where 100vw equals the viewport width in pixels.

As for the z₅ function, it’s tempting to think that z₅ = 5z₁. But remember what we learned from that first demo: viewport-based units don’t scale up with the browser’s zoom level. This means z₅ is more correctly expressed like this:

z₅(v) = clamp(16*5, 5.33*5 + 0.0333v, 48*5)

Notice: This scales everything up by 5 (or 500%), except for v. This simulates how the browser scales the page when zooming.

Let’s represent the clamp() function mathematically. We can convert it to a piecewise function, meaning z₁(v) and z₅(v) would ultimately look like the following figure:
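
Written out with v in pixels, those piecewise functions come to:

z₁(v) = 16 when v ≤ 320; 5.33 + 0.0333v when 320 < v < 1280; 48 when v ≥ 1280

z₅(v) = 80 when v ≤ 1600; 26.67 + 0.0333v when 1600 < v < 6400; 240 when v ≥ 6400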

We can graph these functions to help visualize the problem. Here’s the base function, z₁(v), with the viewport width, v, on the x-axis:

This looks about right. The font size stays at 16px until the viewport is 320px wide, and it increases linearly from there before it hits 48px at a viewport width of 1280px. So far, so good.

Here’s a more interesting graph comparing 2z₁(v) and z₅(v):

Can you spot the accessibility failure on this graph? When z₅(v) (in green) is less than 2z₁(v) (in teal), the viewport-based font size fails WCAG SC 1.4.4.

Let’s zoom into the bottom-left region for a closer look:

This figure indicates that failure occurs when the browser width is approximately between 1050px and 2100px. You can verify this by opening the original demo again and zooming into it at different viewport widths. When the viewport is less than 1050px or greater than 2100px, the text should scale up to at least two times its original size at a 500% zoom. But when it’s in between 1050px and 2100px, it doesn’t.

Hint: We have to manually measure the text — e.g., take a screenshot — because browsers don’t show zoomed values in DevTools.

General Solutions

For simplicity’s sake, we’ve only focused on one clamp() expression so far. Can we generalize these findings somehow to ensure any clamped expression passes WCAG SC 1.4.4?

Let’s take a closer look at what’s happening in the failure above. Notice that the problem is caused because 2z₁(v) — the SC 1.4.4 requirement — reaches its peak before z₅(v) starts increasing.

When would that be the case? Everything in 2z₁(v) is scaled by 200%, including the slope of the line (v). The function reaches its peak value at the same viewport width where z₁(v) reaches its peak value (the maximum 1280px breakpoint). That peak value is two times the maximum font size we want, which, in this case, is 2*48, or 96px.

However, the slope of z₅(v) is the same as z₁(v). In other words, the function doesn’t start increasing from its lowest clamped point — five times the minimum font size we want — until the viewport width is five times the minimum breakpoint. In this case, that is 5*320, or 1600px.

Thinking about this generally, we can say that if 2z₁(v) peaks before z₅(v) starts increasing (that is, if the maximum breakpoint is less than five times the minimum breakpoint), then the peak value of 2z₁(v) must be less than or equal to the peak value of z₅(v). In other words, two times the maximum font size must be less than or equal to five times the minimum font size.

Or simpler still: The maximum value must be less than or equal to 2.5 times the minimum value.
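
To put numbers on that rule with this article’s values: given a 16px minimum, the maximum can be at most 2.5 * 16px = 40px. A sketch of a passing declaration over the same 320px to 1280px breakpoint range (the intermediate values below are derived from those two assumptions) could be:

h1 {
  /* 16px at a 320px viewport, 40px at 1280px; 40 <= 2.5 * 16, so this passes SC 1.4.4 */
  font-size: clamp(16px, 8px + 2.5vw, 40px);
}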

What about when the maximum breakpoint is more than five times the minimum breakpoint? Let’s see what our graph looks like when we change the maximum breakpoint from 1280px to 1664px and the maximum font size to 40px:

Technically, we could get away with a slightly higher maximum font size. To figure out just how much higher, we’d have to solve for z₅(v) ≥ 2z₁(v) at the point when 2z₁(v) reaches its peak, which is when v equals the maximum breakpoint. (Hat tip to my brother, Zach Barvian, whose excellent math skills helped me with this.)

To save you the math, you can play around with this calculator to see which combinations pass WCAG SC 1.4.4.

Conclusion

Summing up what we’ve covered:

  • If the maximum font size is less than or equal to 2.5 times the minimum font size, then the text will always pass WCAG SC 1.4.4, at least on all modern browsers.
  • If the maximum breakpoint is greater than five times the minimum breakpoint, it is possible to get away with a slightly higher maximum font size. That said, the increase is negligible, and that is a large breakpoint range to use in practice.

Importantly, that first rule is true for non-fluid responsive type as well. If you open this pen, for example, notice that it uses regular media queries to increase the h1 element’s size from an initial value of 1rem to 3rem (which violates our first rule), with an in-between stop for 2rem.

If you zoom in at 500% with a browser width of approximately 1000px, you will see that the text doesn’t reach 200% of its initial size. This makes sense because if you were to describe 2z₁(v) and z₅(v) mathematically, they would be even simpler piecewise functions with the same maximum and minimum limitations. This guideline would hold for any function describing a font size with a known minimum and maximum.

In the future, of course, we may get more tools from browsers to address these issues and accommodate even larger maximum font sizes. In the meantime, though, I hope you find this article helpful when building responsive frontends.

]]>
hello@smashingmagazine.com (Maxwell Barvian)
<![CDATA[In Search Of The Ideal Privacy Icon]]> https://smashingmagazine.com/2023/11/search-ideal-privacy-icon/ https://smashingmagazine.com/2023/11/search-ideal-privacy-icon/ Thu, 02 Nov 2023 15:00:00 GMT I’ve been on the lookout for a privacy icon and thought I’d bring you along that journey. This project I’ve been working on calls for one, but, honestly, nothing really screams “this means privacy” to me. I did what many of us do when we need inspiration for icons and searched The Noun Project, and perhaps you’ll see exactly what I mean with a small sample of what I found.

Padlocks, keys, shields, and unsighted eyeballs. There’s a lot of ambiguity here, at best, and certainly no consensus on how to convey “privacy” visually. Any of these could mean several different things. For instance, the eyeball with a line through it is something I often see associated with visibility (or lack thereof), such as hiding and showing a password in an account login context.

So, that is the journey I am on. Let’s poke at some of the existing options of icons that exist for communicating privacy to see what works and what doesn’t. Maybe you’ll like one of the symbols we’ll stumble across. Or maybe you’re simply curious how I — or someone else — approach a design challenge like this and where the exploration goes.

Is A Specific Icon Even Necessary?

There are a couple of solid points to be made about whether we need a less ambiguous icon for privacy or if an icon is even needed in the first place.

For example, it’s fair to say that the content surrounding the icon will clarify the meaning. Sure, an eyeball with a line through it can mean several things, but if there’s a “Privacy” label next to it, then does any of this really matter? I think so.

Visuals enhance content, and if we have one that is not fully aligned with the context in which it is used, we’re actually subtracting from the content rather than adding to it.

In other words, I believe the visual should bolster the content, not the other way around.

Another fair point: text labels are effective on their own and do not need to be enhanced.

I remember a post that Thomas Byttebier wrote back in 2015 that makes this exact case. The clincher is the final paragraph:

“I hope all of this made clear that icons can easily break the most important characteristic of a good user interface: clarity. So be very careful, and test! And when in doubt, always remember this: the best icon is a text label.”

— Thomas Byttebier

The Nielsen Norman Group also reminds us that a user’s understanding of icons is based on their past experiences. It goes on to say that universally recognized icons are rare and likely exceptions to the rule:

“[…] Most icons continue to be ambiguous to users due to their association with different meanings across various interfaces. This absence of a standard hurts the adoption of an icon over time, as users cannot rely on it having the same functionality every time it is encountered.”

That article also makes several points in support of using icons, so it’s not like a black-and-white or a one-size-fits-all sort of rule we’re subject to. But it does bring us to our next point.

Communicating “Privacy”

Let’s acknowledge off the bat that “privacy” is a convoluted term and that there is a degree of subjectivity when it comes to interpreting words and visuals. There may be more than one right answer or even different answers depending on the specific context you’re solving for.

In my particular case, the project is calling for a visual for situations when the user’s account is set to “private,” allowing them to be excluded from public-facing interfaces, like a directory of users. It is pretty close to the idea of the eyeball icons in that the user is hidden from view. So, while I can certainly see an argument made in favor of eyeballs with lines through them, there’s still some cognitive reasoning needed to differentiate it from other use cases, like the password protection example we looked at.

The problem is that there is no ironclad standard for how to represent privacy. What I want is something that is as universally recognized as the icons we typically see in a browser’s toolbar. There’s little, if any, confusion about what happens when clicking on the Home icon in your browser. It’s the same deal with Refresh (arrow with a circular tail), Search (magnifying glass), and Print (printer).

In a world with so many icon repositories, emoji, and illustrations, how is it that there is nothing specifically defined for something as essential on the internet as privacy?

If there’s no accord over an icon, then we’ll just have to use our best judgement. Before we look at specific options that are available in the wild, let’s take a moment to define what we even mean when talking about “privacy.” A quick define: privacy search in DuckDuckGo produces a few meanings pulled from The American Heritage Dictionary:

  1. The quality or condition of being secluded from the presence or view of others. “I need some privacy to change into my bathing suit.”
  2. The state of being free from public attention or unsanctioned intrusion. “A person’s right to privacy.”
  3. A state of being private, or in retirement from the company or from the knowledge or observation of others; seclusion.

Those first two definitions are a good point of reference. It’s about being out of public view to the extent that there’s a sense of freedom to move about without intrusion from other people. We can keep this in mind as we hunt for icons.

The Padlock Icon

We’re going to start with the icon I most commonly encounter when searching for something related to privacy: the padlock.

If I were to end my search right this moment and go with whatever’s out there for the icon, I’d grab the padlock. The padlock is good. It’s old, well-established, and quickly recognizable. That said, the reason I want to look beyond the lock is because it represents way too many things but is most widely associated with security and protection. It suggests that someone is locked up or locked out and that all it takes is a key to undo it. There’s nothing closely related to the definitions we’re working with, like seclusion and freedom. It’s more about confinement and being on the outside, looking in.

Relatively speaking, modern online privacy is a recent idea and an umbrella term. It’s not the same as locking up a file or application. In fact, we may not lock something at all and still can claim it is private. Take, for instance, an end-to-end encrypted chat message; it’s not locked with a user password or anything like that. It’s merely secluded from public view, allowing the participants to freely converse with one another.

I need a privacy symbol that doesn’t tie itself to password protection alone. Privacy is not a locked door or window but a closed one. It is not a chained gate but a tall hedge. I’m sure you get the gist.

But like I said before, a padlock is fairly reliable, and if nothing else works out, I’d gladly use it in spite of its closer alignment with security because it is so recognizable.

The Detective Icon

When searching “private” in an emoji picker, a detective is one of the options that come up. Get it, like a “private” detective or “private” eye?

I have mixed feelings about using a detective to convey privacy. One thing I love about it is that “private” is in the descriptor. It’s actually what Chrome uses for its private browsing, or “Incognito” mode.

I knew what this meant when I first saw it. There’s a level of privacy represented here. It’s essentially someone who doesn’t want to be recognized and is obscuring their identity.

My mixed feelings come down to a few reasons. First off, why is it that those who have to protect their privacy are the ones who need to look like they are spying on others and cover themselves with hats, sunglasses, and coats? Secondly, the detective is not minimal enough; there is a lot of detail to take in.

When we consider a pictograph, we can’t just consider it in a standalone context. It has to go well with the others in a group setting. Although the detective’s face doesn’t stand out much, it is not as minimal as the others, and that can lead to too many derivatives.

A very minimal icon, like the now-classic (it wasn’t always the case) hamburger menu, gives less leeway for customization, which, in turn, protects that icon from being cosmetically changed into something that it’s not. What if somebody makes a variation of the detective, giving him a straw hat and a Hawaiian shirt? He would look more like a tourist hiding from the sun than someone who’s incognito. Yes, both can be true at the same time, but I don’t want to give him that much credit.

That said, I’ll definitely consider this icon if I were to put together a set of ornate pictographs to be used in an application. This one would be right at home in that context.

The Zorro Mask Icon

I was going to call it an eye mask, but that gives me a mental picture of people sleeping in airplanes. That term is taken. With some online searching, I found that the formal name for this Zorro-esque accessory is the domino mask.

I’m going with the Zorro mask.

I like this icon for two reasons: It’s minimal, and it’s decipherable. It’s like a classy version of the detective, as in it’s not a full-on cover-up. It appears less “shady,” so to speak.

But does the Zorro mask unambiguously mean “privacy”? Although it does distinguish itself from the full-face mask icon that usually represents drama and acting (🎭), its association with theater is not totally non-existent. Mask-related icons have long been the adopted visual for conveying theater. The gap in meaning between privacy and theater is so great that there’s too much room for confusion and for it to appear out of context.

It does, however, have potential. If every designer were to begin employing the Zorro mask to represent privacy in interfaces, then users would learn to associate the mask with privacy just as readily as they associate a magnifying glass icon with search.

In the end, though, this journey is not about me trying to guess what works in a perfect world but me in search of the “perfect” privacy pictograph available right now, and I don’t feel like it’s ended with the Zorro mask.

The Shield Icon

No. Just no.

Here’s why. The shield, just like the lock, is exceptionally well established as a visual for antivirus software or any defense against malicious software. It works extremely well in that context. Any security-related application can proudly don a shield to establish trust in the app’s ability to defend against attacks.

Again, there is no association with “secluded from public view” or “freedom from intrusion” here. Privacy can certainly be a form of defense, but given the other options we’ve seen so far, a shield is not the strongest association we can find.

Some New Ideas

If we’re striking out with existing icons, then we might consider conceiving our own! It doesn’t hurt to consider new options. I have a few ideas with varying degrees of effectiveness.

The Blurred User Icon

The idea is that a user is sitting behind some sort of satin texture or frosted glass. That could be a pretty sleek visual for someone who is unrecognizable and able to move about freely without intrusion.

I like the subtlety of this concept. The challenge, though, is two-fold:

  1. The blurriness could get lost, or worse, distorted, when the icon is applied at a small size.
  2. Similarly, it might look like a poor, improperly formatted image file that came out pixelated.

This idea has promise, for sure, but clearly (pun intended), not without shortcomings.

The Venetian Blind Icon

I can also imagine how a set of slatted blinds could be an effective visual for privacy. It blocks things out of view, but not in an act of defense, like the shield, or a locked encasing, such as the padlock.

Another thing I really like about this direction is that it communicates the ability to toggle privacy as a setting. Want privacy? Close the blinds and walk freely about your house. Want guests? Lift the blinds and welcome in the daylight!

At the same time, I feel like my attempt or execution suffers from the same fate as the detective icon. While I love the immediate association with privacy, it offers too much visual detail that could easily get lost in translation at a smaller size, just as it does with the detective.

The Picket Fence Icon

We’ve likened privacy to someone being positioned behind a hedge, so what if we riff on that and attempt something similar: a fence?

I like this one. For me, it fits the purpose just as well and effectively as the Zorro mask, perhaps better. It’s something that separates (or secludes) two distinct areas that prevent folks from looking in or hopping over. This is definitely a form of privacy.

Thinking back to the Nielsen Norman Group’s assertion that universally recognized icons are a rarity, the only issue I see with the fence is that it is not a well-established symbol. I remember seeing an icon of a castle wall years ago, but I have never seen a fence used in a user interface. So, it would take some conditioning for the fence to make that association.

So, Which One Should I Use?

We’ve looked at quite a few options! It’s not like we’ve totally exhausted our options, either, but we’ve certainly touched on a number of possibilities while considering some new ideas. I really wish there was some instantly recognizable visual that screams “privacy” at any size, whether it’s the largest visual in the interface or a tiny 30px×30px icon. Instead, I feel like everything falls somewhere in the middle of a wide spectrum.

Here’s the spoiler: I chose the Zorro mask. And I chose it for all the reasons we discussed earlier. It’s recognizable, is closely associated with “masking” an identity, and conveys that a user is freely able to move about without intrusion. Is it perfect? No. But I think it’s the best fit given the options we’ve considered.

Deep down, I really wanted to choose the fence icon. It’s the perfect metaphor for privacy, which is an instantly recognizable part of everyday life. But as something that is a new idea and that isn’t in widespread use, I feel it would take more cognitive load to make out what it is conveying than it’s worth — at least for now.

And if neither the Zorro mask nor the fence fit for a given purpose, I’m most likely to choose a pictograph of the exact feature used to provide privacy: encryption, selective visibility, or biometrics. Like, if there’s a set of privacy-related features that needs to be communicated for a product — perhaps for a password manager or the like — it might be beneficial to include a set of icons that can represent those features collectively.

An absolutely perfect pictograph is something that’s identifiable to any user, regardless of past experiences or even the language they speak.

Do you know how the “OK” hand sign (👌) is universally understood as a good thing, or how you know how to spot the food court in an airport with a fork and knife icon? That would be the ideal situation. Yet, for contemporary notions, like online privacy, that sort of intuitiveness is more of a luxury.

But with consistency and careful consideration, we can adopt new ideas and help users understand the visual over time. It has to reach a point where the icon is properly enhancing the content rather than the other way around, and that takes a level of commitment and execution that doesn’t happen overnight.

What do you think about my choice? This is merely how I’ve approached the challenge. I shared my thought process and the considerations that influenced my decisions. How would you have approached it, and what would you have decided in the end?

]]>
hello@smashingmagazine.com (Preethi Sam)
<![CDATA[Answering Common Questions About Interpreting Page Speed Reports]]> https://smashingmagazine.com/2023/10/answering-questions-interpreting-page-speed-reports/ https://smashingmagazine.com/2023/10/answering-questions-interpreting-page-speed-reports/ Tue, 31 Oct 2023 16:00:00 GMT This article is a sponsored by DebugBear

Running a performance check on your site isn’t too terribly difficult. It may even be something you do regularly with Lighthouse in Chrome DevTools, where testing is freely available and produces a very attractive-looking report.

Lighthouse is only one performance auditing tool out of many. The convenience of having it tucked into Chrome DevTools is what makes it an easy go-to for many developers.

But do you know how Lighthouse calculates performance metrics like First Contentful Paint (FCP), Total Blocking Time (TBT), and Cumulative Layout Shift (CLS)? There’s a handy calculator linked up in the report summary that lets you adjust performance values to see how they impact the overall score. Still, there’s nothing in there to tell us about the data Lighthouse is using to evaluate metrics. The linked-up explainer provides more details, from how scores are weighted to why scores may fluctuate between test runs.

Why do we need Lighthouse at all when Google also offers similar reports in PageSpeed Insights (PSI)? The truth is that the two tools were fairly distinct until PSI was updated in 2018 to use Lighthouse reporting.

Did you notice that the Performance score in Lighthouse is different from that PSI screenshot? How can one report result in a near-perfect score while the other appears to find more reasons to lower the score? Shouldn’t they be the same if both reports rely on the same underlying tooling to generate scores?

That’s what this article is about. Different tools make different assumptions using different data, whether we are talking about Lighthouse, PageSpeed Insights, or commercial services like DebugBear. That’s what accounts for different results. But there are more specific reasons for the divergence.

Let’s dig into those by answering a set of common questions that pop up during performance audits.

What Does It Mean When PageSpeed Insights Says It Uses “Real-User Experience Data”?

This is a great question because it provides a lot of context for why it’s possible to get varying results from different performance auditing tools. In fact, when we say “real user data,” we’re really referring to two different types of data. And when discussing the two types of data, we’re actually talking about what is called real-user monitoring, or RUM for short.

Type 1: Chrome User Experience Report (CrUX)

What PSI means by “real-user experience data” is that it evaluates the performance data used to measure the core web vitals from your tests against the core web vitals data of actual real-life users. That real-life data is pulled from the Chrome User Experience (CrUX) report, a set of anonymized data collected from Chrome users — at least those who have consented to share data.

CrUX data is important because it is how core web vitals are measured, which, in turn, are a ranking factor for Google’s search results. Google focuses on the 75th percentile of users in the CrUX data when reporting core web vitals metrics. This way, the data represents a vast majority of users while minimizing the possibility of outlier experiences.

But it comes with caveats. For example, the data is pretty slow to update, refreshing every 28 days, meaning it is not the same as real-time monitoring. At the same time, if you plan on using the data yourself, you may find yourself limited to reporting within that floating 28-day range unless you make use of the CrUX History API or BigQuery to produce historical results you can measure against. CrUX is what fuels PSI and Google Search Console, but it is also available in other tools you may already use.
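
As a rough sketch of that first option: the CrUX History API is a single POST endpoint, so assuming you have an API key from Google Cloud, a query for an origin’s LCP history would look something like this (run inside an async context):

const res = await fetch(
  'https://chromeuserexperience.googleapis.com/v1/records:queryHistoryRecord?key=YOUR_API_KEY',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // Query by origin (or use "url" for a specific page); "metrics" filters the response.
    body: JSON.stringify({
      origin: 'https://example.com',
      metrics: ['largest_contentful_paint'],
    }),
  }
);
const history = await res.json(); // A weekly time series of the metric's distributions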

Barry Pollard, a web performance developer advocate for Chrome, wrote an excellent primer on the CrUX Report for Smashing Magazine.

Type 2: Full Real-User Monitoring (RUM)

If CrUX offers one flavor of real-user data, then we can consider “full real-user data” to be another flavor that provides even more in the way of individual experiences, such as specific network requests made by the page. This data is distinct from CrUX because it’s collected directly by the website owner by installing an analytics snippet on their website.

Unlike CrUX data, full RUM pulls data from other users using other browsers in addition to Chrome and does so on a continual basis. That means there’s no waiting 28 days for a fresh set of data to see the impact of any changes made to a site.

You can see how you might wind up with different results in performance tests simply by the type of real-user monitoring (RUM) that is in use. Both types are useful, but you might find that CrUX-based results offer more of a high-level view of current performance than an accurate reflection of the users on your site because of that 28-day waiting period, which is where full RUM shines with more immediate results and a greater depth of information.

Does Lighthouse Use RUM Data, Too?

It does not! It uses synthetic data, or what we commonly call lab data. And, just like RUM, we can explain the concept of lab data by breaking it up into two different types.

Type 1: Observed Data

Observed data is performance as the browser sees it. So, instead of monitoring real information collected from real users, observed data is more like defining the test conditions ourselves. For example, we could add throttling to the test environment to enforce an artificial condition where the test opens the page on a slower connection. You might think of it like racing a car in virtual reality, where the conditions are decided in advance, rather than racing on a live track where conditions may vary.

Type 2: Simulated Data

While we called that last type of data “observed data,” that is not an official industry term or anything. It’s more of a necessary label to help distinguish it from simulated data, which describes how Lighthouse (and many other tools that include Lighthouse in its feature set, such as PSI) applies throttling to a test environment and the results it produces.

The reason for the distinction is that there are different ways to throttle a network for testing. Simulated throttling starts by collecting data on a fast internet connection, then estimates how quickly the page would have loaded on a different connection. The result is a much faster test than it would be to apply throttling before collecting information. Lighthouse can often grab the results and calculate its estimates faster than the time it would take to gather the information and parse it on an artificially slower connection.

Simulated And Observed Data In Lighthouse

Simulated data is the data that Lighthouse uses by default for performance reporting. It’s also what PageSpeed Insights uses since it is powered by Lighthouse under the hood, although PageSpeed Insights also relies on real-user experience data from the CrUX report.

However, it is also possible to collect observed data with Lighthouse. This data is more reliable since it doesn’t depend on an incomplete simulation of Chrome internals and the network stack. The accuracy of observed data depends on how the test environment is set up. If throttling is applied at the operating system level, then the metrics match what a real user with those network conditions would experience. DevTools throttling is easier to set up, but doesn’t accurately reflect how server connections work on the network.
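
For what it’s worth, the Lighthouse CLI exposes this choice directly via the --throttling-method flag (its values are simulate, devtools, and provided, with simulate being the default). A quick sketch, assuming the lighthouse npm package is available:

npx lighthouse https://example.com --throttling-method=devtools --only-categories=performance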

Limitations Of Lab Data

Lab data is fundamentally limited by the fact that it only looks at a single experience in a pre-defined environment. This environment often doesn’t even match the average real user on the website, who may have a faster network connection or a slower CPU. Continuous real-user monitoring can actually tell you how users are experiencing your website and whether it’s fast enough.

So why use lab data at all?

The biggest advantage of lab data is that it produces much more in-depth data than real user monitoring.

Google CrUX data only reports metric values with no debug data telling you how to improve your metrics. In contrast, lab reports contain a lot of analysis and recommendations on how to improve your page speed.

Why Is My Lighthouse LCP Score Worse Than The Real User Data?

It’s a little easier to explain different scores now that we’re familiar with the different types of data used by performance auditing tools. We now know that Google reports on the 75th percentile of real users when reporting core web vitals, which include LCP.

“By using the 75th percentile, we know that most visits to the site (3 of 4) experienced the target level of performance or better. Additionally, the 75th percentile value is less likely to be affected by outliers. Returning to our example, for a site with 100 visits, 25 of those visits would need to report large outlier samples for the value at the 75th percentile to be affected by outliers. While 25 of 100 samples being outliers is possible, it is much less likely than for the 95th percentile case.”

Brian McQuade

On the flip side, simulated data from Lighthouse neither reports on real users nor accounts for outlier experiences in the same way that CrUX does. So, if we were to set heavy throttling on the CPU or network of a test environment in Lighthouse, we’re actually embracing outlier experiences that CrUX might otherwise toss out. Because Lighthouse applies heavy throttling by default, the result is that we get a worse LCP score in Lighthouse than we do in PSI simply because Lighthouse’s data effectively looks at a slow outlier experience.

Why Is My Lighthouse CLS Score Better Than The Real User Data?

Just so we’re on the same page, Cumulative Layout Shift (CLS) measures the “visible stability” of a page layout. If you’ve ever visited a page, scrolled down it a bit before the page has fully loaded, and then noticed that your place on the page shifts when the page load is complete, then you know exactly what CLS is and how it feels.

The nuance here has to do with page interactions. We know that real users are capable of interacting with a page even before it has fully loaded. This is a big deal when measuring CLS because layout shifts often occur lower on the page after a user has scrolled down the page. CrUX data is ideal here because it’s based on real users who would do such a thing and bear the worst effects of CLS.

Lighthouse’s simulated data, meanwhile, does no such thing. It waits patiently for the full page load and never interacts with parts of the page. It doesn’t scroll, click, tap, hover, or interact in any way.

This is why you’re more likely to receive a lower CLS score in a PSI report than you’d get in Lighthouse. It’s not that PSI likes you less, but that the real users in its report are a better reflection of how users interact with a page and are more likely to experience CLS than simulated lab data.

Why Is Interaction to Next Paint Missing In My Lighthouse Report?

This is another case where it’s helpful to know the different types of data used in different tools and how that data interacts — or not — with the page. That’s because the Interaction to Next Paint (INP) metric is all about interactions. It’s right there in the name!

The fact that Lighthouse’s simulated lab data does not interact with the page is a dealbreaker for an INP report. INP is a measure of the latency for all interactions on a given page, where the highest latency — or close to it — informs the final score. For example, if a user clicks on an accordion panel and it takes longer for the content in the panel to render than any other interaction on the page, that is what gets used to evaluate INP.

So, when INP becomes an official core web vitals metric in March 2024, and you notice that it’s not showing up in your Lighthouse report, you’ll know exactly why it isn’t there.

Note: It is possible to script user flows with Lighthouse, including in DevTools. But that probably goes too deep for this article.

Why Is My Time To First Byte Score Worse For Real Users?

The Time to First Byte (TTFB) is what immediately comes to mind for many of us when thinking about page speed performance. We’re talking about the time between establishing a server connection and receiving the first byte of data to render a page.

TTFB identifies how fast or slow a web server is to respond to requests. What makes it special in the context of core web vitals — even though it is not considered a core web vital itself — is that it precedes all other metrics. The web server needs to establish a connection in order to receive the first byte of data and render everything else that core web vitals metrics measure. TTFB is essentially an indication of how fast users can navigate, and core web vitals can’t happen without it.

You might already see where this is going. When we start talking about server connections, there are going to be differences between the way that RUM data observes the TTFB versus how lab data approaches it. As a result, we’re bound to get different scores based on which performance tools we’re using and in which environment they are. As such, TTFB is more of a “rough guide,” as Jeremy Wagner and Barry Pollard explain:

“Websites vary in how they deliver content. A low TTFB is crucial for getting markup out to the client as soon as possible. However, if a website delivers the initial markup quickly, but that markup then requires JavaScript to populate it with meaningful content […], then achieving the lowest possible TTFB is especially important so that the client-rendering of markup can occur sooner. […] This is why the TTFB thresholds are a “rough guide” and will need to be weighed against how your site delivers its core content.”

Jeremy Wagner and Barry Pollard

So, if your TTFB score comes in higher when using a tool that relies on RUM data than the score you receive from Lighthouse’s lab data, it’s probably because of caches being hit when testing a particular page. Or perhaps the real user is coming in from a shortened URL that redirects them before connecting to the server. It’s even possible that a real user is connecting from a place that is really far from your web server, which takes a little extra time, particularly if you’re not using a CDN or running edge functions. It really depends on both the user and how you serve data.

Why Do Different Tools Report Different Core Web Vitals? What Values Are Correct?

This article has already introduced some of the nuances involved when collecting web vitals data. Different tools and data sources often report different metric values. So which ones can you trust?

When working with lab data, I suggest preferring observed data over simulated data. But you’ll see differences even between tools that all deliver high-quality data. That’s because no two tests are the same, with different test locations, CPU speeds, or Chrome versions. There’s no one right value. Instead, you can use the lab data to identify optimizations and see how your website changes over time when tested in a consistent environment.

Ultimately, what you want to look at is how real users experience your website. From an SEO standpoint, the 28-day Google CrUX data is the gold standard. However, it won’t be accurate if you’ve rolled out performance improvements over the last few weeks. Google also doesn’t report CrUX data for some high-traffic pages because the visitors may not be logged in to their Google profile.

Installing a custom RUM solution on your website can solve that issue, but the numbers won’t match CrUX exactly. That’s because visitors using browsers other than Chrome are now included, as are users with Chrome analytics reporting disabled.

Finally, while Google focuses on the fastest 75% of experiences, that doesn’t mean the 75th percentile is the correct number to look at. Even with good core web vitals, 25% of visitors may still have a slow experience on your website.

Wrapping Up

This has been a close look at how different performance tools audit and report on performance metrics, such as core web vitals. Different tools rely on different types of data that are capable of producing different results when measuring different performance metrics.

So, if you find yourself with a CLS score in Lighthouse that is far lower than what you get in PSI or DebugBear, go with the Lighthouse report because it makes you look better to the big boss. Just kidding! That difference is a big clue that the data between the two tools is uneven, and you can use that information to help diagnose and fix performance issues.

Are you looking for a tool to track lab data, Google CrUX data, and full real-user monitoring data? DebugBear helps you keep track of all three types of data in one place and optimize your page speed where it counts.

]]>
hello@smashingmagazine.com (Geoff Graham)
<![CDATA[Tales Of November (2023 Wallpapers Edition)]]> https://smashingmagazine.com/2023/10/desktop-wallpaper-calendars-november-2023/ https://smashingmagazine.com/2023/10/desktop-wallpaper-calendars-november-2023/ Tue, 31 Oct 2023 09:45:00 GMT November tends to be rather gray in many parts of the world. So what better remedy could there be than some colorful inspiration? To bring some good vibes to your desktops and home screens, artists and designers from across the globe once again channeled their creativity and designed beautiful and inspiring wallpapers to welcome the new month.

The wallpapers in this collection all come in versions with and without a calendar for November 2023 and can be downloaded for free. And since so many unique designs have seen the light of day in the more than twelve years that we’ve been running this monthly wallpapers series, we also compiled a selection of November favorites from our archives at the end of the post. Maybe you’ll spot one of your almost-forgotten favorites in there, too? A big thank you to everyone who shared their designs with us this month — this post wouldn’t exist without you. Happy November!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.
Transition

“Inspired by the transition from autumn to winter.” — Designed by Tecxology from India.

Ghostly Gala

Designed by Bhabna Basak from India.

Journey Through November

“Step into the embrace of November’s beauty. On this National Hiking Day, let every trail lead you to a new discovery and every horizon remind you of nature’s wonders. Lace up, venture out, and celebrate the great outdoors.” — Designed by PopArt Studio from Serbia.

Bug

Designed by Ricardo Gimenes from Sweden.

Sunset Or Sunrise

“November is autumn in all its splendor. Earthy colors, falling leaves and afternoons in the warmth of the home. But it is also adventurous and exciting and why not, different. We sit in Bali contemplating Pura Ulun Danu Bratan. We don’t know if it’s sunset or sunrise, but… does that really matter?” — Designed by Veronica Valenzuela Jimenez from Spain.

Harvesting A New Future

“Our team takes pride in aligning our volunteer initiatives with the 2030 Agenda for Sustainable Development’s ‘Zero Hunger’ goal. This goal reflects a global commitment to addressing food-related challenges comprehensively and sustainably, aiming to end hunger, ensure food security, improve nutrition, and promote sustainable agriculture. We encourage our team members to volunteer with non-profits they care about year-round. Explore local opportunities and use your skills to make a meaningful impact!” — Designed by Jenna Miller from Portland, OR.

Behavior Analysis

Designed by Ricardo Gimenes from Sweden.

Oldies But Goodies

Some things are just too good to be forgotten, so below you’ll find a selection of oldies but goodies from our wallpapers archives. Please note that these designs don’t come with a calendar.

Anbani

“Anbani means alphabet in Georgian. The letters that grow on that tree are the Georgian alphabet. It’s very unique!” — Designed by Vlad Gerasimov from Georgia.

Cozy Autumn Cups And Cute Pumpkins

“Autumn coziness, which is created by fallen leaves, pumpkins, and cups of cocoa, inspired our designers for this wallpaper.” — Designed by MasterBundles from Ukraine.

A Jelly November

“Been looking for a mysterious, gloomy, yet beautiful desktop wallpaper for this winter season? We’ve got you, as this month’s calendar marks Jellyfish Day. On November 3rd, we celebrate these unique, bewildering, and stunning marine animals. Besides adorning your screen, we’ve got you covered with some jellyfish fun facts: they aren’t really fish, they need very little oxygen, eat a broad diet, and shrink in size when food is scarce. Now that’s some tenacity to look up to.” — Designed by PopArt Studio from Serbia.

Colorful Autumn

“Autumn can be dreary, especially in November, when rain starts pouring every day. We wanted to summon better days, so that’s how this colourful November calendar was created. Open your umbrella and let’s roll!” — Designed by PopArt Studio from Serbia.

The Kind Soul

“Kindness drives humanity. Be kind. Be humble. Be humane. Be the best of yourself!” — Designed by Color Mean Creative Studio from Dubai.

Time To Give Thanks

Designed by Glynnis Owen from Australia.

Moonlight Bats

“I designed some Halloween characters and then this idea came to my mind — a bat family hanging around in the moonlight. A cute and scary mood is just perfect for autumn.” — Designed by Carmen Eisendle from Germany.

Outer Space

“We were inspired by the nature around us and the universe above us, so we created an out-of-this-world calendar. Now, let us all stop for a second and contemplate on preserving our forests, let us send birds of passage off to warmer places, and let us think to ourselves — if not on Earth, could we find a home somewhere else in outer space?” — Designed by PopArt Studio from Serbia.

Winter Is Here

Designed by Ricardo Gimenes from Sweden.

Go To Japan

“November is the perfect month to go to Japan. Autumn is beautiful with its brown colors. Let’s enjoy it!” — Designed by Veronica Valenzuela from Spain.

International Civil Aviation Day

“On December 7, we mark International Civil Aviation Day, celebrating those who prove day by day that the sky really is the limit. As the engine of global connectivity, civil aviation is now, more than ever, a symbol of social and economic progress and a vehicle of international understanding. This monthly calendar is our sign of gratitude to those who dedicate their lives to enabling everyone to reach their dreams.” — Designed by PopArt Studio from Serbia.

Tempestuous November

“By the end of autumn, ferocious Poseidon will part from tinted clouds and timid breeze. After this uneven clash, the sky once more becomes pellucid just in time for imminent luminous snow.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Peanut Butter Jelly Time!

“November is the Peanut Butter Month so I decided to make a wallpaper around that. As everyone knows peanut butter goes really well with some jelly so I made two sandwiches, one with peanut butter and one with jelly. Together they make the best combination. I also think peanut butter tastes pretty good so that’s why I chose this for my wallpaper.” — Designed by Senne Mommens from Belgium.

On The Edge Of Forever

“November has always reminded me of the famous Guns N’ Roses song, so I’ve decided to look at its meaning from a different perspective. The story in my picture takes place somewhere in space, where a young guy beholds a majestic meteor shower and wonders about the mysteries of the universe.” — Designed by Aliona Voitenko from Ukraine.

Me And The Key Three

Designed by Bart Bonte from Belgium.

Mushroom Season

“It is autumn! It is raining and thus… it is mushroom season! It is the perfect moment to go to the forest and get the best mushrooms to do the best recipe.” — Designed by Verónica Valenzuela from Spain.

Welcome Home Dear Winter

“The smell of winter is lingering in the air. The time to be home! Winter reminds us of good food, of the warmth, the touch of a friendly hand, and a talk beside the fire. Keep calm and let us welcome winter.” — Designed by Acodez IT Solutions from India.

A Gentleman’s November

Designed by Cedric Bloem from Belgium.

Sailing Sunwards

“There’s some pretty rough weather coming up these weeks. Thinking about November makes me want to keep all the warm thoughts in mind. I’d like to wish everyone a cozy winter.” — Designed by Emily Trbl. Kunstreich from Germany.

Hold On

“We have to acknowledge that some things are inevitable, like winter. Let’s try to hold on until we can, and then embrace the beautiful season.” — Designed by Igor Izhik from Canada.

Hello World, Happy November

“I often read messages at Smashing Magazine from people in the southern hemisphere saying ‘it’s spring, not autumn!’ so I wanted to design a wallpaper for both the northern and the southern hemispheres. Here it is, northerners and southerners, hope you like it!” — Designed by Agnes Swart from the Netherlands.

Snoop Dog

Designed by Ricardo Gimenes from Sweden.

No Shave Movember

“The goal of Movember is to ‘change the face of men’s health.’” — Designed by Suman Sil from India.

Deer Fall, I Love You

Designed by Maria Porter from the United States.

Autumn Choir

Designed by Hatchers from Ukraine / China.

Late Autumn

“The late arrival of Autumn.” — Designed by Maria Castello Solbes from Spain.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[Passkeys: A No-Frills Explainer On The Future Of Password-Less Authentication]]> https://smashingmagazine.com/2023/10/passkeys-explainer-future-password-less-authentication/ https://smashingmagazine.com/2023/10/passkeys-explainer-future-password-less-authentication/ Mon, 30 Oct 2023 10:00:00 GMT Passkeys are a new way of authenticating with applications and websites. Instead of having to remember a password, a third-party service provider (e.g., Google or Apple) generates and stores a cryptographic key pair that is bound to a website domain. Since you have access to the service provider, you have access to the keys, which you can then use to log in.

This cryptographic key pair contains both private and public keys that are used for authenticating messages. This scheme is often known as asymmetric or public-key cryptography.

Public and private key pair? Asymmetric cryptography? Like most modern technology, passkeys are described by esoteric verbiage and acronyms that make them difficult to discuss. That’s the point of this article. I want to put the complex terms aside and help illustrate how passkeys work, explain what they are effective at, and demonstrate what it looks like to work with them.

How Passkeys Work

Passkeys are cryptographic keys that rely on generating signatures. A signature is proof that a message is authentic. How so? It happens first by hashing the message (condensing it into a short, fixed-size fingerprint) and then creating a signature from that hash with your private key. The private key in the cryptographic key pair allows the signature to be generated, and the public key, which is shared with others, allows the service to verify that the message did, in fact, come from you.

In short, passkeys consist of two keys: a public and a private one. The private key creates a signature, the public key verifies it, and that exchange is what grants you access to an account.

Here’s a quick way of generating a signing and verification key pair to authenticate a message using the SubtleCrypto API. While this is only part of how passkeys work, it does illustrate how the concept works cryptographically underneath the specification.

const message = new TextEncoder().encode("My message");

const keypair = await crypto.subtle.generateKey(
  { name: "ECDSA", namedCurve: "P-256" },
  true,
  [ 'sign', 'verify' ]
);

const signature = await crypto.subtle.sign(
  { name: "ECDSA", hash: "SHA-256" },
  keypair.privateKey,
  message
);

// Normally, someone else would be doing the verification using your public key
// but it's a bit easier to see it yourself this way
console.log(
  "Did my private key sign this message?",
  await crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    keypair.publicKey,
    signature,
    message
  )
);

Notice the three parts pulling all of this together:

  1. Message: A message is constructed.
  2. Key pair: The public and private keys are generated. One key is used for the signature, and the other is set to do the verification.
  3. Signature: A signature is signed by the private key, verifying the message’s authenticity.

From there, a third party can use your public key to verify signatures produced by your private key, confirming that the two keys belong to the same pair. We’ll get into the weeds of how the keys are generated and used in just a bit, but for now, this is some context as we continue to understand why passkeys can potentially erase the need for passwords.

Why Passkeys Can Replace Passwords

Since the responsibility of storing passkeys is transferred to a third-party service provider, you only have to control the “parent” account in order to authenticate and gain access. This is a lot like using single sign-on (SSO) for an account via Google, Facebook, or LinkedIn, but instead, we use an account that has control of the passkey stored for each individual website.

For example, I can use my Google account to store passkeys for somerandomwebsite.com. That allows me to answer a challenge by signing it with that passkey’s private key and thus authenticate and log into somerandomwebsite.com.

For the non-tech savvy, this typically looks like a prompt that the user can click to log in. Since the credential is tied to the domain name (somerandomwebsite.com), and passkeys created for a domain name are only accessible to the user at login, the user can select which passkey they wish to use for access. This is usually only one login, but in some cases, you can create multiple logins for a single domain and then select which one you wish to use from there.

So, what’s the downside? Having to store additional cryptographic keys for each login and every site for which you have a passkey often requires more space than storing a password. However, I would argue that the security gains, the user experience from not having to remember a password, and the prevention of common phishing techniques more than offset the increased storage space.

How Passkeys Protect Us

Passkeys prevent a couple of security issues that are quite common, specifically leaked database credentials and phishing attacks.

Database Leaks

Have you ever shared a password with a friend or colleague by copying and pasting it for them in an email or text? That could lead to a security leak. So would a hack on a system that stores customer information, like passwords, which is then sold on dark marketplaces or made public. In many cases, it’s a weak set of credentials — like an email and password combination — that can be stolen with a fair amount of ease.

Passkeys technology circumvents this because passkeys only store a public key to an account, and as you may have guessed by the name, this key is expected to be made accessible to anyone who wants to use it. The public key is only used for verification purposes and, for the intended use case of passkeys, is effectively useless without the private key to go with it, as the two are generated as a pair. Therefore, those previous juicy database leaks are no longer useful, as they can no longer be used for cracking the password for your account. Cracking a similar private key would take millions of years at this point in time.

Phishing

Passwords rely on knowing what the password is for a given login: anyone with that same information has the same level of access to the same account as you do. There are sophisticated phishing sites that look like they’re by Microsoft or Google and will redirect you to the real provider after you attempt to log into their fake site. The damage is already done at that point; your credentials are captured, and hopefully, the same credentials weren’t being used on other sites, as now you’re compromised there as well.

A passkey, by contrast, is tied to a domain. You gain a new element of security: the fact that only you have the private key. Since the private key is not feasible to remember nor computationally easy to guess, we can guarantee that you are who you say you are (at least as long as your passkey provider is not compromised). So, that fake phishing site? It will not even show the passkey prompt because the domain is different, which completely mitigates phishing attempts.

There are, of course, theoretical attacks that can make passkeys vulnerable, like someone compromising your DNS server to send you to a domain that now points to their fake site. That said, you probably have deeper issues to concern yourself with if it gets to that point.

Implementing Passkeys

At a high level, a few items are needed to start using passkeys, at least for the common sign-up and log-in process. You’ll need a temporary cache of some sort, such as Redis or Memcached, for storing temporary challenges that users can authenticate against, as well as a more permanent data store for storing user accounts and their public key information, which can be used to authenticate the user over the course of their account lifetime. These aren’t hard requirements but rather what’s typical of what would be developed for this kind of authentication process.
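
To make the challenge flow a little more concrete, here is a minimal sketch of how a server might generate and cache a random challenge. This is not the author’s implementation; the Redis-style client and the key naming are assumptions for illustration.

import { randomBytes } from 'node:crypto';

async function generateChallenge(redis, sessionId) {
  // 32 random bytes, Base64-url encoded so the challenge survives JSON transport
  const challenge = randomBytes(32).toString('base64url');

  // Keep the challenge short-lived so it can only be used for this one attempt
  await redis.set(`challenge:${sessionId}`, challenge, { EX: 300 });

  return challenge;
}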

To understand passkeys properly, though, we want to work through a couple of concepts. The first concept is what is actually taking place when we generate a passkey. How are passkeys generated, and what are the underlying cryptographic primitives that are being used? The second concept is how passkeys are used to verify information and why that information can be trusted.

Generating Passkeys

A passkey involves an authenticator to generate the key pair. The authenticator can either be hardware or software. For example, it can be a hardware security key, the operating system’s Trusted Platform Module (TPM), or some other application. In the cases of Android or iOS, we can use the device’s secure enclave.

To connect to an authenticator, we use what’s called the Client to Authenticator Protocol (CTAP). CTAP allows us to connect to hardware over different connections through the browser. For example, we can connect via CTAP over NFC, Bluetooth, or USB. This is useful in cases where we want to log in on one device while another device contains our passkeys, as is the case on some operating systems that do not support passkeys at the time of writing.

Passkeys are built on top of another web API called WebAuthn. The two are very similar, but passkeys differ in that they allow for cloud syncing of the cryptographic keys and do not require the site to know who the user is before logging in, as that information is stored in the passkey along with its Relying Party (RP) information. The two APIs otherwise share the same flows and cryptographic operations.

Storing Passkeys

Let’s look at an extremely high-level overview of how I’ve stored and kept track of passkeys in my demo repo. This is how the database is structured.

Basically, a users table has public_keys, which, in turn, contains information about the public key, as well as the public key itself.

From there, I’m caching certain information, including challenges to verify authenticity and data about the sessions in which the challenges take place.

Again, this is only a high-level look to give you a clearer idea of what information is stored and how it is stored.
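
As a rough sketch of that structure (with illustrative field names, not the author’s actual schema), the stored records might look something like this:

type User = {
  id: string;
};

type PublicKeyRecord = {
  kid: string;                // the credential id, used as the primary key
  userId: string;             // reference back to the users table
  pubkey: string;             // Base64-url encoded SPKI public key
  coseAlg: number;            // COSE algorithm identifier, e.g., -7 for ES256
  attestationObject?: string; // optional, in case more data is needed for verification
};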

Verifying Passkeys

There are several entities involved in the passkey process:

  1. The authenticator, which we previously mentioned, generates our key material.
  2. The client that triggers the passkey generation process via the navigator.credentials.create call.
  3. The Relying Party takes the resulting public key from that call and stores it to be used for subsequent verification.

In our case, you are the client and the Relying Party is the website server you are trying to sign up and log into. The authenticator can either be your mobile phone, a hardware key, or some other device capable of generating your cryptographic keys.

Passkeys are used in two phases: the attestation phase and the assertion phase. The attestation phase is likened to a registration that you perform when first signing up for a service. Instead of an email and password, we generate a passkey.

Assertion is similar to logging in to a service after we are registered, and instead of verifying with a username and password, we use the generated passkey to access the service.

Each phase initially requires a random challenge generated by the Relying Party, which is then signed by the authenticator before the client sends the signature back to the Relying Party to prove account ownership.

Browser API Usage

We’ll be looking at how the browser constructs and supplies information for passkeys so that you can store and utilize it for your login process. First, we’ll start with the attestation phase and then the assertion phase.

Attest To It

The following shows how to create a new passkey using the navigator.credentials.create API. From it, we receive an AuthenticatorAttestationResponse, and we want to send portions of that response to the Relying Party for storage.

const { challenge } = await (await fetch("/attestation/generate")).json(); // Server call mock to get a random challenge

const options = {
 // Our challenge should be a base64-url encoded string
 challenge: new TextEncoder().encode(challenge),
 rp: {
  id: window.location.host,
  name: document.title,
 },
 user: {
  id: new TextEncoder().encode("my-user-id"),
  name: 'John',
  displayName: 'John Smith',
 },
 pubKeyCredParams: [ // See COSE algorithms for more: https://www.iana.org/assignments/cose/cose.xhtml#algorithms
  {
   type: 'public-key',
   alg: -7, // ES256
  },
  {
   type: 'public-key',
   alg: -257, // RS256
  },
  {
   type: 'public-key',
   alg: -37, // PS256
  },
 ],
 authenticatorSelection: {
  userVerification: 'preferred', // Do you want to use biometrics or a pin?
  residentKey: 'required', // Create a resident key e.g. passkey
 },
 attestation: 'indirect', // indirect, direct, or none
 timeout: 60_000,
};

// Create the credential through the Authenticator
const credential = await navigator.credentials.create({
 publicKey: options
});

// Our main attestation response. See: https://developer.mozilla.org/en-US/docs/Web/API/AuthenticatorAttestationResponse
const attestation = credential.response as AuthenticatorAttestationResponse;

// Now send this information off to the Relying Party
// An unencoded example payload with most of the useful information
const payload = {
 kid: credential.id,
 clientDataJSON: attestation.clientDataJSON,
 attestationObject: attestation.attestationObject,
 pubkey: attestation.getPublicKey(),
 coseAlg: attestation.getPublicKeyAlgorithm(),
};

The AuthenticatorAttestationResponse contains the clientDataJSON as well as the attestationObject. It also gives us a couple of useful methods that save us from having to extract the public key and its COSE algorithm from the attestationObject ourselves: getPublicKey and getPublicKeyAlgorithm.

Let’s dig into these pieces a little further.

Parsing The Attestation clientDataJSON

The clientDataJSON object is composed of a few fields we need. We can convert it to a workable object by decoding it and then running it through JSON.parse.

type DecodedClientDataJSON = {
 challenge: string,
 origin: string,
 type: string
};

const decoded: DecodedClientDataJSON = JSON.parse(new TextDecoder().decode(attestation.clientDataJSON));
const {
 challenge,
 origin,
 type
} = decoded;

Now we have a few fields to check against: challenge, origin, type.

Our challenge is the Base64-url encoded string that was passed to the server. The origin is the host (e.g., https://my.passkeys.com) of the server we used to generate the passkey. Meanwhile, the type is webauthn.create. The server should verify that all the values are expected when parsing the clientDataJSON.
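
As a rough sketch, assuming the expected challenge and origin come from your own session storage, those server-side checks could look something like the following. The function name and error handling are illustrative, not part of the specification.

function verifyClientData(decoded, expectedChallenge, expectedOrigin) {
  // The type tells us which ceremony produced this payload
  if (decoded.type !== 'webauthn.create') {
    throw new Error('Unexpected type');
  }

  // The origin must match the site the passkey was created for
  if (decoded.origin !== expectedOrigin) {
    throw new Error('Unexpected origin');
  }

  // Compare against the issued challenge, using the same Base64-url encoding
  if (decoded.challenge !== expectedChallenge) {
    throw new Error('Challenge mismatch');
  }

  return true;
}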

Decoding The attestationObject

The attestationObject is a CBOR encoded object. We need to use a CBOR decoder to actually see what it contains. We can use a package like cbor-x for that.

import { decode } from 'cbor-x/decode';

enum DecodedAttestationObjectFormat {
  none = 'none',
  packed = 'packed',
}
type DecodedAttestationObjectAttStmt = {
  x5c?: Uint8Array[];
  sig?: Uint8Array;
};

type DecodedAttestationObject = {
  fmt: DecodedAttestationObjectFormat;
  authData: Uint8Array;
  attStmt: DecodedAttestationObjectAttStmt;
};

const decodedAttestationObject: DecodedAttestationObject = decode(
 new Uint8Array(attestation.attestationObject)
);

const {
 fmt,
 authData,
 attStmt,
} = decodedAttestationObject;

fmt will often evaluate to "none" here for passkeys. Other fmt types are generated by other kinds of authenticators.

Accessing authData

The authData is a buffer of values with the following structure:

  • rpIdHash (32 bytes): The SHA-256 hash of the origin, e.g., my.passkeys.com.
  • flags (1 byte): Flags that determine multiple pieces of information (specification).
  • signCount (4 bytes): This should always be 0000 for passkeys.
  • attestedCredentialData (variable length): This will contain credential data if it’s available in a COSE key format.
  • extensions (variable length): These are any optional extensions for authentication.
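
Parsing this buffer by hand is rarely necessary, but if you ever need to, a minimal sketch that follows the byte layout above could look like this:

function parseAuthData(authData) { // authData is a Uint8Array
  const rpIdHash = authData.slice(0, 32);
  const flags = authData[32];

  // signCount is a 4-byte big-endian integer starting at byte 33
  const signCount = new DataView(
    authData.buffer,
    authData.byteOffset + 33,
    4
  ).getUint32(0, false);

  return { rpIdHash, flags, signCount };
}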

It is recommended to use the getPublicKey method here instead of manually retrieving the attestedCredentialData.

A Note About The attStmt Object

This is often an empty object for passkeys. However, in other cases of a packed format, which includes the sig, we will need to perform some authentication to verify the sig. This is out of the scope of this article, as it often requires a hardware key or some other type of device-based login.

Retrieving The Encoded Public Key

The getPublicKey method retrieves the Subject Public Key Info (SPKI) encoded version of the public key, which is different from the COSE key format (more on that next) stored within the attestedCredentialData of the decoded authData. The SPKI format has the benefit of being compatible with the Web Crypto importKey function, which makes it easier to verify assertion signatures in the next phase.

// Example of importing attestation public key directly into Web Crypto
const pubkey = await crypto.subtle.importKey(
  'spki',
  attestation.getPublicKey(),
  { name: "ECDSA", namedCurve: "P-256" },
  true,
  ['verify']
);

Generating Keys With COSE Algorithms

The algorithms that can be used to generate cryptographic material for a passkey are specified by their COSE Algorithm. For passkeys generated for the web, we want to be able to generate keys using the following algorithms, as they are supported natively in Web Crypto. Personally, I prefer ECDSA-based algorithms since the key sizes are quite a bit smaller than RSA keys.

The COSE algorithms are declared in the pubKeyCredParams array that we pass to navigator.credentials.create. We can then retrieve the COSE algorithm of the generated key from the AuthenticatorAttestationResponse with the getPublicKeyAlgorithm method. For example, if getPublicKeyAlgorithm returned -7, we’d know that the key used the ES256 algorithm. A small sketch mapping these identifiers to Web Crypto parameters follows the table below.

  • ES512 (-36): ECDSA w/ SHA-512
  • ES384 (-35): ECDSA w/ SHA-384
  • ES256 (-7): ECDSA w/ SHA-256
  • RS512 (-259): RSASSA-PKCS1-v1_5 using SHA-512
  • RS384 (-258): RSASSA-PKCS1-v1_5 using SHA-384
  • RS256 (-257): RSASSA-PKCS1-v1_5 using SHA-256
  • PS512 (-39): RSASSA-PSS w/ SHA-512
  • PS384 (-38): RSASSA-PSS w/ SHA-384
  • PS256 (-37): RSASSA-PSS w/ SHA-256
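
For the ECDSA variants, a small sketch of mapping these COSE identifiers to their corresponding Web Crypto parameters could look like the following. The object combines the curve needed for importing a key with the hash needed for verifying signatures.

const coseToWebCrypto = {
  [-7]:  { name: 'ECDSA', namedCurve: 'P-256', hash: 'SHA-256' }, // ES256
  [-35]: { name: 'ECDSA', namedCurve: 'P-384', hash: 'SHA-384' }, // ES384
  [-36]: { name: 'ECDSA', namedCurve: 'P-521', hash: 'SHA-512' }, // ES512
};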

Responding To The Attestation Payload

I want to show you an example of a response we would send to the server for registration. In short, the safeByteEncode function is used to change the buffers into Base64-url encoded strings.

type AttestationCredentialPayload = {
  kid: string;
  clientDataJSON: string;
  attestationObject: string;
  pubkey: string;
  coseAlg: number;
};

const payload: AttestationCredentialPayload = {
  kid: credential.id,
  clientDataJSON: safeByteEncode(attestation.clientDataJSON),
  attestationObject: safeByteEncode(attestation.attestationObject),
  pubkey: safeByteEncode(attestation.getPublicKey() as ArrayBuffer),
  coseAlg: attestation.getPublicKeyAlgorithm(),
};
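
safeByteEncode comes from the author’s demo repo. As a rough approximation of what such a helper does (the actual implementation may differ), it could look like this:

function safeByteEncode(buffer) {
  const bytes = new Uint8Array(buffer);
  let binary = '';

  for (const byte of bytes) {
    binary += String.fromCharCode(byte);
  }

  // btoa produces standard Base64; swap the URL-unsafe characters
  // and strip the padding to get Base64-url
  return btoa(binary)
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');
}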

The credential id (kid) should always be captured to look up the user’s keys, as it will be the primary key in the public_keys table.

From there:

  1. The server would check the clientDataJSON to ensure the same challenge is used.
  2. The origin is checked, and the type is confirmed to be webauthn.create.
  3. We check the attestationObject to ensure it has an fmt of none, and we verify the rpIdHash in the authData, as well as any flags and the signCount.

Optionally, we could check to see if the attestationObject.attStmt has a sig and verify the public key against it, but that’s for other types of WebAuthn flows we won’t go into.

We should store the public key and the COSE algorithm in the database at the very least. It is also beneficial to store the attestationObject in case we require more information for verification. The signCount is always incremented on every login attempt if supporting other types of WebAuthn logins; otherwise, it should always be 0000 for a passkey.

Asserting Yourself

Now we have to retrieve a stored passkey using the navigator.credentials.get API. From it, we receive the AuthenticatorAssertionResponse, which we want to send portions of to the Relying Party for verification.

const { challenge } = await (await fetch("/assertion/generate")).json(); // Server call mock to get a random challenge

const options = {
  challenge: new TextEncoder().encode(challenge),
  rpId: window.location.host,
  timeout: 60_000,
};

// Sign the challenge with our private key via the Authenticator
const credential = await navigator.credentials.get({
  publicKey: options,
  mediation: 'optional',
});

// Our main assertion response. See: https://developer.mozilla.org/en-US/docs/Web/API/AuthenticatorAssertionResponse
const assertion = credential.response as AuthenticatorAssertionResponse;

// Now send this information off to the Relying Party
// An example payload with most of the useful information
const payload = {
  kid: credential.id,
  clientDataJSON: safeByteEncode(assertion.clientDataJSON),
  authenticatorData: safeByteEncode(assertion.authenticatorData),
  signature: safeByteEncode(assertion.signature),
};

The AuthenticatorAssertionResponse again has the clientDataJSON, and now the authenticatorData. We also have the signature that needs to be verified with the stored public key we captured in the attestation phase.

Decoding The Assertion clientDataJSON

The assertion clientDataJSON is very similar to the attestation version. We again have the challenge, origin, and type. Everything is the same, except the type is now webauthn.get.

type DecodedClientDataJSON = {
  challenge: string,
  origin: string,
  type: string
};

const decoded: DecodedClientDataJSON = JSON.parse(new TextDecoder().decode(assertion.clientDataJSON));
const {
  challenge,
  origin,
  type
} = decoded;

Understanding The authenticatorData

The authenticatorData is similar to the previous attestationObject.authData, except we no longer have the public key included (i.e., the attestedCredentialData), nor any extensions.

  • rpIdHash (32 bytes): This is a SHA-256 hash of the origin, e.g., my.passkeys.com.
  • flags (1 byte): Flags that determine multiple pieces of information (specification).
  • signCount (4 bytes): This should always be 0000 for passkeys, just as it should be for authData.

Verifying The signature

The signature is what we need to verify that the user trying to log in has the private key. It is produced by signing the concatenation of the authenticatorData and the clientDataHash (i.e., the SHA-256 hash of clientDataJSON).

To verify with the public key, we need to also concatenate the authenticatorData and clientDataHash. If the verification returns true, we know that the user is who they say they are, and we can let them authenticate into the application.

Here’s an example of how this is calculated:

const clientDataHash = await crypto.subtle.digest(
  'SHA-256',
  assertion.clientDataJSON
);
// For concatBuffer see: https://github.com/nealfennimore/passkeys/blob/main/src/utils.ts#L31
const data = concatBuffer(
  assertion.authenticatorData,
  clientDataHash
);

// NOTE: The signature from the assertion is in ASN.1 DER encoding. To get it
// working with Web Crypto, we need to transform it into r|s encoding, which is
// specific to ECDSA algorithms.
//
// For fromAsn1DERtoRSSignature see: https://github.com/nealfennimore/passkeys/blob/main/src/crypto.ts#L60
const isVerified = await crypto.subtle.verify(
  { name: 'ECDSA', hash: 'SHA-256' },
  pubkey,
  fromAsn1DERtoRSSignature(assertion.signature, 256),
  data
);

Sending The Assertion Payload

Finally, we get to send a response to the server with the assertion for logging into the application.

type AssertionCredentialPayload = {
  kid: string;
  clientDataJSON: string;
  authenticatorData: string;
  signature: string;
};

const payload: AssertionCredentialPayload = {
  kid: credential.id,
  clientDataJSON: safeByteEncode(assertion.clientDataJSON),
  authenticatorData: safeByteEncode(assertion.authenticatorData),
  signature: safeByteEncode(assertion.signature),
};

To complete the assertion phase, we first look up the stored public key by the credential id (kid).

Next, we verify the following:

  • clientDataJSON again to ensure the same challenge is used,
  • The origin is the same, and
  • That the type is webauthn.get.

The authenticatorData can be used to check the rpIdHash, flags, and the signCount one more time. Finally, we take the signature and ensure that the stored public key can be used to verify that the signature is valid.

At this point, if all went well, the server should have verified all the information and allowed you to access your account! Congrats — you logged in with passkeys!

No More Passwords?

Do passkeys mean the end of passwords? Probably not… at least for a while anyway. Passwords will live on. However, there’s hope that more and more of the industry will begin to use passkeys. You can already find them implemented in many of the applications you use every day.

Passkeys are not the only implementation to rely on cryptographic means of authentication. A notable example is SQRL (pronounced “squirrel”). The industry as a whole, however, has decided to move forward with passkeys.

Hopefully, this article demystified some of the internal workings of passkeys. The industry as a whole is going to be using passkeys more and more, so it’s important to at least get acclimated. With all the security gains that passkeys provide and their resistance to phishing attacks, we can at least be more at ease browsing the internet when using them.

]]>
hello@smashingmagazine.com (Neal Fennimore)
<![CDATA[What I Wish I Knew About Working In Development Right Out Of School]]> https://smashingmagazine.com/2023/10/beginner-web-development-working-career/ https://smashingmagazine.com/2023/10/beginner-web-development-working-career/ Fri, 27 Oct 2023 15:00:00 GMT My journey in front-end web development started after university. I had no idea what I was going into, but it looked easy enough to get my feet wet at first glance. I dug around Google and read up on tons of blog posts and articles about a career in front-end. I did bootcamps and acquired a fancy laptop. I thought I was good to go and had all I needed.

Then reality started to kick in. It started when I realized how vast of a landscape Front-End Land is. There are countless frameworks, techniques, standards, workflows, and tools — enough to fill a virtual Amazon-sized warehouse. Where does someone so new to the industry even start? My previous research did nothing to prepare me for what I was walking into.

Fast-forward one year, and I feel like I’m beginning to find my footing. By no means do I consider myself a seasoned veteran at the moment, but I have enough road behind me to reflect back on what I’ve learned and what I wish I knew about the realities of working in front-end development when starting out. This article is about that.

The Web Is Big Enough For Specializations

At some point in my journey, I enrolled myself in a number of online courses and bootcamps to help me catch up on everything from data analytics to cybersecurity to software engineering at the same time. These were things I kept seeing pop up in articles. I was so confused; I believed all of these disciplines were interchangeable and part of the same skill set.

But that is just what they are: disciplines.

What I’ve come to realize is that being an “expert” in everything is a lost cause in the ever-growing World Wide Web.

Sure, it’s possible to be generally familiar with a wide spectrum of web-related skills, but it’s hard for me to see how to develop “deep” learning of everything. There will be weak spots in anyone’s skillset.

It would take a lifetime masterclass to get everything down-pat. Thank goodness there are ways to specialize in specific areas of the web, whether it is accessibility, performance, standards, typography, animations, interaction design, or many others that could fill the rest of this article. It’s OK to be one developer with a small cocktail of niche specialties. We need to depend on each other as much as any Node package in a project relies on a number of dependencies.

Burnout And Imposter Syndrome Are Real

My initial plan for starting my career was to master as many skills as possible and start making a living within six months. I figured if I could have a wide set of strong skills, then maybe I could lean on one of them to earn money and continue developing the rest of my skills on my way to becoming a full-stack developer.

I got it wrong. It turned out that I was chasing my tail in circles, trying to be everything to everyone. Just as I’d get an “a-ha!” moment learning one thing, I’d see some other new framework, CSS feature, performance strategy, design system, and so on in my X/Twitter feed that was calling my attention. I never really did get a feeling of accomplishment; it was more a fear of missing out and that I was an imposter disguised as a front-ender.

I continued burning the candle at both ends to absorb everything in my path, thinking I might reach some point at which I could call myself a full-stack developer and earn the right to slow down and coast with my vast array of skills. But I kept struggling to keep up and instead earned many sleepless nights cramming in as much information as I could.

Burnout is something I don’t wish on anyone. I was tired and mentally stressed. I could have done better. I engaged in every Twitter space or virtual event I could to learn a new trick and land a steady job. Imagine that, with my busy schedule, I still pause it to listen to hours of online events. I had an undying thirst for knowledge but needed to channel it in the right direction.

We Need Each Other

I had spent so much time and effort consuming information with the intensity of a firehose running at full blast that I completely overlooked what I now know is an essential asset in this industry: a network of colleagues.

I was on my own. Sure, I was sort of engaging with others by reading their tutorials, watching their video series, reading their social posts, and whatnot. But I didn’t really know anyone personally. I became familiar with all the big names you probably know as well, but it’s not like I worked or even interacted with anyone directly.

What I know now is that I needed personal advice every bit as much as more technical information. It often takes the help of someone else to learn how to ride a bike, so why wouldn’t it be the same for writing code?

Having a mentor or two would have helped me maintain balance throughout my technical bike ride, and now I wish I had sought someone out much earlier.

I should have asked for help when I needed it rather than stubbornly pushing forward on my own. I was feeding my burnout more than I was making positive progress.

Start With The Basics, Then Scale Up

My candid advice, based on my own experience, is to start by learning front-end fundamentals. HTML and CSS are unlikely to go away. I mean, everything parses into HTML at the end of the day, right? And CSS is used on 97% of all websites.

The truth is that HTML and CSS are big buckets, even if they are usually discounted as “basic” or “easy” compared to traditional programming languages. Writing them well matters for everything. Sure, go ahead and jump straight to JavaScript, and it’s possible to cobble together a modern web app with an architecture of modular components. You’ll still need to know how your work renders and ensure it’s accessible, semantic, performant, cross-browser-supported, and responsive. You may pick those skills up along the way, but why not learn them up-front when they are essential to a good user experience?

So, before you click on yet another link extolling the virtues of another flavor of JavaScript framework, my advice is to start with the essentials:

  • What is a “semantic” HTML element?
  • What is the CSS Box Model, and why does it matter?
  • How does the CSS Cascade influence the way we write styles?
  • How does a screenreader announce elements on a page?
  • What is the difference between inline and block elements?
  • Why do we have logical properties in CSS when we already have physical ones?
  • What does it mean to create a stacking context or remove an element from the document flow?
  • How do certain elements look in one browser versus another?

The list could go on and on. I bet many of you know the answers. I wonder, though, how many of them you could explain effectively to someone beginning a front-end career. And, remember, things change. New standards are shipped, new tricks are discovered, and certain trends will fade as quickly as they came. While staying up-to-date with front-end development on a macro level is helpful, I’ve learned to integrate specific new technologies and strategies into my work only when I have a use case for them and concentrate more on my own learning journey — establish a solid foundation with the essentials, then progress to real-life projects.

Progress is a process. May as well start with evergreen information and add complexity to your knowledge when you need it instead of drinking from the firehose at all times.

There’s A Time And Place For Everything

I’ll share a personal story. I spent over a month enrolled in a course on React. I even had to apply for it first, so it was something I had to be accepted into — and I was! I was super excited.

I struggled in the class, of course. And, yes, I dropped out of the program after the first month.

I don’t believe struggling with the course or dropping out of it is any indication of my abilities. I believe it has a lot more to do with timing. The honest truth is that I thought learning React before the fundamentals of front-end development was the right thing to do. React seemed to be the number one thing that everyone was blogging about and what every employer was looking for in a new hire. The React course I was accepted into was my ticket to a successful and fulfilling career!

My motive was right, but I was not ready for it. I should have stuck with the basics and scaled up when I was good and ready to move forward. Instead of building up, I took a huge shortcut and wound up paying for it in the end, both in time and money.

That said, there’s probably no harm in dipping your toes in the water even as you learn the basics. There are plenty of events, hackathons, and coding challenges that offer safe places to connect and collaborate with others. Engaging in some of these activities early on may be a great learning opportunity to see how your knowledge supports or extends someone else’s skills. It can help you see where you fit in and what considerations go into real-life projects that require other people.

There was a time and place for me to learn React. The problem is I jumped the gun and channeled my learning energy in the wrong direction.

If I Had To Do It All Over Again…

This is the money question, right? Everyone wants to know exactly where to start, which classes to take, what articles to read, who to follow on socials, where to find jobs, and so on. The problem with highly specific advice like this is that it’s highly personalized as well. In other words, what has worked for me may not exactly be the right recipe for you.

It’s not the most satisfying answer, but the path you take really does depend on what you want to do and where you want to wind up. Aside from gaining a solid grasp on the basics, I wouldn’t say your next step is jumping into React when your passion is web typography. Both are skill sets that can be used together but are separate areas of concern that have different learning paths.

So, what would I do differently if I had the chance to do this all over again?

For starters, I wouldn’t skip over the fundamentals like I did. I would probably find opportunities to enhance my skills in those areas, like taking freeCodeCamp’s responsive web design course or recreating designs from the Figma community in CodePen to practice thinking strategically about structuring my code. Then, I might move on to the JavaScript Algorithms and Data Structures course to level up my basic JavaScript skills.

The one thing I know I would do right away, though, is to find a mentor whom I can turn to when I start feeling as though I’m struggling and falling off track.

Or maybe I should have started by learning how to learn in the first place. Figuring out what kind of learner I am and familiarizing myself with learning strategies that help me manage my time and energy would have gone a long way.

Oh, The Places You’ll Go!

Front-end development is full of opinions. The best way to navigate this world is by mastering the basics. I shared my journey, mistakes, and ways of doing things differently if I were to start over. Rather than prescribing you a specific way of going about things or giving you an endless farm of links to all of the available front-end learning resources, I’ll share a few that I personally found helpful.

In the end, I’ve found that I care a lot about contributing to open-source projects, participating in hackathons, having a learning plan, and interacting with mentors who help me along the way, so those are the buckets I’m organizing things into.

Open Source Programs

Hackathons

Developer Roadmaps

Mentorship

Whatever your niche is, wherever your learning takes you, just make sure it’s yours. What works for one person may not be the right path for you, so spend time exploring the space and picking out what excites you most. The web is big, and there is a place for everyone to shine, especially you.

]]>
hello@smashingmagazine.com (Victoria Johnson)
<![CDATA[An Actionable And Reliable Usability Questionnaire With Only 7 Items: Inuit]]> https://smashingmagazine.com/2023/10/actionable-reliable-usability-questionnaire-inuit/ https://smashingmagazine.com/2023/10/actionable-reliable-usability-questionnaire-inuit/ Thu, 26 Oct 2023 10:00:00 GMT Inuit (short for “Interface Usability Instrument”) is a new questionnaire you can use to assess the usability of your user interface. It has been designed to be more diagnostic than existing usability instruments like, e.g., SUS and for use with machine learning, all the while asking fewer questions than other questionnaires. This article explores how and why Inuit has been developed and why we can be sure that it actually measures usability, and reliably so.]]> A lot of contemporary usability evaluation relies on easily measurable and readily available metrics like conversion rates, task success rates, and time on task, even though it’s questionable how well these are suited for reliably capturing a concept as complex as usability in its entirety.

The same holds for user experience. When an instrument is used to measure usability, e.g., in controlled user studies or via live intercepts, it’s often the simple single ease question, which is generally not a bad choice, but has its limits.

Note: For more information on usability evaluation, you can check the article “Current Practice in Measuring Usability: Challenges to Usability Studies and Research” by Kasper Hornbæk and “Growth Marketing Considered Harmful” by Maximilian Speicher.

Ultimately, when you intend to precisely and reliably measure the usability of a digital product, there’s no way around a scientifically well-founded instrument or, in everyday terms, a “questionnaire.” The most famous one is probably SUS, the System Usability Scale, but there are also some others around. Two examples are UMUX, the Usability Measure for User Experience, and SUMI, the Software Usability Measurement Inventory.

To join this party, in this article, we introduce Inuit (the Interface Usability Instrument), a new usability questionnaire. We will share how and why it was developed and how it’s different from the questionnaires mentioned above.

To immediately cut to the chase: With a scale from 1 (“completely disagree”) to 5 (“completely agree”), Inuit looks as follows. The parts in square brackets can be adapted to your specific interface, e.g., products in an online shop, articles on a news website, or results in a search engine.

Q1 I found [the information] I was looking for.
Q2 I could easily understand [the provided information].
Q3 I was confused using [the interface].
Q4 I was distracted by elements of [the interface].
Q5 Typography and layout added to readability.
Q6 There was too much information presented in too little space.
Q7 [My desired information] was easily reachable.

The Inuit metric (a score between 0 to 100, analogous to SUS) can then be calculated as follows:

(Q1 + Q2 + Q5 + Q7 - Q3 - Q4 - Q6 + 11) * 100/28

Why 11 and 28?

We have seven items rated on a scale from 1 to 5, but for some (Q1, Q2, Q5, Q7), 5 is the best rating, and for some (Q3, Q4, Q6), 1 is the best rating. Hence, we need to subtract the latter from 6 when we add up everything: Q1 + Q2 + Q5 + Q7 + (6-Q3) + (6-Q4) + (6-Q6) = Q1 + Q2 + Q5 + Q7 - Q3 - Q4 - Q6 + 18. This gives us an overall score between 7 and 35.
Now, we want to normalize this to a score between 0 and 100. For this, we first subtract 7 for a score between 0 and 28: Q1 + Q2 + Q5 + Q7 - Q3 - Q4 - Q6 + 18 - 7 = Q1 + Q2 + Q5 + Q7 - Q3 - Q4 - Q6 + 11. Finally, for a score between 0 and 100, we need to divide everything by 28 and multiply by 100: (Q1 + Q2 + Q5 + Q7 - Q3 - Q4 - Q6 + 11) * 100/28.
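
If you want to compute the score programmatically, the whole calculation boils down to a one-liner. Here is a small example:

// Takes the seven ratings (each from 1 to 5) in question order
// and returns the Inuit score between 0 and 100.
function inuitScore([q1, q2, q3, q4, q5, q6, q7]) {
  return ((q1 + q2 + q5 + q7 - q3 - q4 - q6 + 11) * 100) / 28;
}

inuitScore([5, 5, 1, 1, 5, 1, 5]); // best possible answers: 100
inuitScore([1, 1, 5, 5, 1, 5, 1]); // worst possible answers: 0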

You might have noticed that compared to, e.g., SUS with 10, Inuit consists of only 7 questions. Apart from that, it has two more advantages:

  • Inuit has been designed to provide training data for machine-learning models that can then automatically predict usability from user interactions or web analytics data.
  • Its items (i.e., the questions) are diagnostic, at least to a certain degree. This means you see what’s wrong with your interface simply by looking at the results from the questionnaire. Have a bad rating for readability (Q5)? You should make the text in your interface more readable.

Now, at this point, you can either accept all this and simply get going with Inuit to measure the usability of your digital product (we’d be delighted). Or, if you’re interested in the details, you’re very welcome to keep reading (we’d be even more delighted).

“So, Why Did You Develop Yet Another Usability Questionnaire?”

You probably already guessed that Inuit wasn’t developed just for fun or because there aren’t enough questionnaires around. But to answer this, we have to reach back a bit.

In 2014, Max was a Ph.D. student busy working on his dissertation. The goal of it all was to find a way to determine the usability of an interface automatically from users’ interactions, such as what they do with the mouse cursor and how they scroll, rather than making participants in a user study fill out pages and pages of questions. Additionally, the cherry on top should be to also automatically propose optimizations for the interface (e.g., if user interactions suggest the interface is not readable, make the text larger).

To be able to achieve this, however, it was first necessary to determine if and how well certain interactions (mouse cursor movements, mouse cursor speed, scrolling behavior, and so on) predict the usability — or rather its individual aspects — of an interface. This meant collecting training data through users’ interactions with an interface and their usability assessments of that interface. Then, one could investigate how well (combinations of) tracked interactions predict (aspects of) usability using regression and/or machine-learning models. So far, so good, as far as the theory is concerned.

In practice, one important decision that would have huge implications for the project was how to collect the usability assessments mentioned above when gathering the training data. Since usability is a latent variable, meaning it can’t be observed directly, a proper instrument (i.e., a questionnaire) is necessary to assess it. And the most famous one is undeniably the System Usability Scale (SUS). It should’ve been an obvious choice, shouldn’t it?

A closer look showed that, while SUS would be perfectly well suited to train statistical models to infer usability from interactions, it simply wasn’t the perfect fit. This was the case mainly for two reasons:

  1. First, many questions contained in SUS (“I think that I would like to use this system frequently,” “I found the various functions in this system were well integrated,” and “I felt very confident using the system,” among others) describe the effects of good or bad usability — users feel confident because the system is usable and so on. But they don’t describe the aspects of usability that cause them, e.g., bad understandability. This makes it difficult to know what should be done to make it better. What exactly should we change to make users feel more confident? The questions are not diagnostic or “actionable” and require further qualitative research to uncover the causes of bad ratings. It’s the same for UMUX and SUMI.
  2. Second, with just 10 items, SUS is already a very small questionnaire. However, the fewer items, the less friction and the more motivated users are to actually answer. So, is ten really the minimum, or would a proper questionnaire with fewer items be possible?

With these considerations in mind, Max went on and ultimately developed Inuit, the instrument presented in the introduction. He ended up with seven items that were better suited for the needs of his Ph.D. project and more actionable than those of SUS.

“How Do You Know This Actually Measures Usability?”

Inuit was developed in a two-step process. The first step was a review of established guidelines and checklists with more than 250 rules for good usability, which were filtered based on the requirements above and resulted in a first draft for the new usability instrument. This draft was then discussed and refined in expert interviews with nine usability professionals.

The final draft of Inuit, with the seven factors informativeness (Q1), understandability (Q2), confusion (Q3), distraction (Q4), readability (Q5), information density (Q6), and reachability (Q7), was evaluated using a confirmatory factor analysis (CFA).

CFA is a method for assessing construct validity, which means it “is used to test whether measures of a construct are consistent with a researcher’s understanding of the nature of that construct” or “to test whether the data fit a hypothesized measurement model.”
Wikipedia

Put very simply, by using a CFA, we can check how well a theory matches the practice. In our case, the “construct” or “hypothesized measurement model” (theory) was Inuit, and the data (practice) came from a user study with 81 participants in which four news websites were evaluated using an Inuit questionnaire.

In a CFA, there are various metrics that show how well a construct fits the data. Two well-established ones are CFI, the comparative fit index, which ranges from 0 to 1, and RMSEA, the root mean square error of approximation, where values closer to 0 indicate a better fit.

For CFI, 0.95 or higher is “accepted as an indicator of good fit” (Wikipedia). Inuit’s value was 0.971. For RMSEA, “values less than 0.05 are good, values between 0.05 and 0.08 are acceptable” (Kim et al.). Inuit’s value was 0.063. This means our theory matches the practice; in other words, Inuit’s questions do indeed measure usability.

Case Study #1

Inuit was first put into practice in 2014 at Unister GmbH, which at that time ran travel search engines like fluege.de and reisen.de, and was developing an entirely new semantic search engine. The results page of this search engine, named BlueKiwi, was evaluated in a user study with 81 participants using Inuit.

In this first study, the overall score averaged across all participants was 59.9. Ratings were especially bad for informativeness (Q1), information density (Q6), and reachability (Q7). Based on these results, BlueKiwi’s search results page was redesigned.

Among other things, the number of advertisements was reduced (better reachability), search results were displayed more concisely (better informativeness), and everything was more clearly aligned and separated (better information density). See the figure below for the full list of changes.

After the redesign, we ran another study, in which the overall Inuit score increased to 67.5 (a 7.6-point improvement), with improvements in every single one of the seven items.

“Why Wait 9 Years To Write This Article?”

There were various factors at play. One was what’s called the research–practice gap. It’s often difficult for academic work to gain traction outside the academic community. One reason for this is that work that is part of a Ph.D. project is often a little neglected after it has served its purpose — being published in a research paper, included in a thesis, and presented at a Ph.D. defense — which is pretty much exactly what happened to Inuit.

Another factor, however, was that we wanted to put the instrument into practice in a real-world industry setting over a longer period of time first, and we got the chance to do that only relatively recently.

Case Study #2

We conducted a longitudinal study over a period of almost two years, running quarterly benchmarks of multiple e-commerce websites using both SUS and Inuit, with a total of 6,368 users. The results of these benchmarks were included in the dashboard of product KPIs and regularly shared with a team of six product managers. After roughly two years of conducting and sharing benchmarks, we interviewed the product managers about their use of the data, challenges, wishes, and potential for improvement.

A high-level analysis showed that all of the product managers, in one way or another, described Inuit as more intuitive to understand, less abstract, and more actionable than SUS when looking at both instruments as a whole.

They found most of Inuit’s items more specific and easier to interpret and, therefore, more relevant from a product manager’s perspective. SUS, in contrast, was described as, e.g., “good for [the] overall score” and the bird’s eye view. Virtually all product managers, however, wished for even more specific insights into where exactly on the website usability problems occur. One suggested building an optimal instrument by combining certain items from both SUS and Inuit.

As part of the analysis, we computed Cronbach’s α for Inuit (based on 3,190 answers) as well as SUS (based on 3,178 answers).

Cronbach’s α is a statistical measure for the internal consistency of an instrument, which can be interpreted as “the extent to which all of the items of a test measure the same latent variable [i.e., usability].”
Wikipedia

Values of 0.7 or above are generally deemed acceptable. Inuit reached a value of 0.7; SUS a value of 0.8.

To top things off, Inuit and SUS showed a considerable (Pearson’s r = 0.53) and highly significant (p < 0.001) correlation when looking at overall scores aggregated over the different e-commerce websites and tasks the study participants had to complete.

In layman’s terms: when the SUS score goes up, the Inuit score goes up; when the SUS score goes down, the Inuit score goes down. Both questionnaires measure the same thing (with a very, very rough approximation of INUIT = 0.6 × SUS + 17).
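
To get a feel for that approximation, here it is as a quick one-liner. It’s purely an illustration of the reported fit, not a validated conversion formula:

const estimateInuitFromSus = (sus) => 0.6 * sus + 17;

console.log(estimateInuitFromSus(70)); // 59
console.log(estimateInuitFromSus(85)); // 68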

Since these first results were so encouraging, we decided to write this general, more practice-oriented overview article about Inuit now. A deeper analysis of our big dataset, however, is yet to be conducted, and our current plan is to report findings in much more detail separately.

“Why Do You Think Inuit Is Better Than SUS?”

We don’t think it is (or that it’s better than any other scientifically founded usability instrument, for that matter). There are many ways to measure the same latent variable, in this case, usability. Both questionnaires, SUS and Inuit, have proven that they can measure the usability of an interface. Still, they were developed in different contexts and with different goals and requirements in mind.

So, to address the question of when it’s better to use which, as true researchers, we have to say “it depends” (annoying, isn’t it?).

SUS, which has been around since the 1990s, is probably the most popular and well-established usability instrument. It’s been studied and validated over and over, which Inuit, of course, can’t compete with yet and still has a long way to go. If the goal is to compare scores at a high level and even tap into public benchmark numbers for orientation, SUS would be preferable.

However, by design, Inuit has two advantages over SUS:

  1. Inuit has only seven items and is still a “complete” usability instrument.
    30% fewer questions can be a major factor when it comes to motivating users to fill out a questionnaire. Assuming that a big part of remote online studies is done quickly in passing and with short attention spans, designing efficient studies that generate reliable output and minimize effects like participant fatigue can be a major challenge for researchers.
  2. Inuit’s items have been specifically designed to be more actionable for practitioners and lend themselves better to manual analysis and inferring potential interface optimizations.
    As we’ve learned in our second case study, talking to actual product managers revealed that for them, the results of a usability assessment should always be as specific as possible. Comparing the items of both, Inuit points to more concrete areas to improve than SUS, which was perceived as rather vague.

“Where Can I Use Inuit?”

Generally, in any scenario that involves an interface and a task — either defined by you or by the users themselves. In the studies described above, we were able to demonstrate that Inuit works well in controlled as well as natural-use settings and with news websites, search engines, and e-commerce shops.

Now, of course, we can’t evaluate Inuit with every possible kind of interface, and that is part of the reason for this article. Inuit has been around and publicly available since 2014, and we have no idea if and how it has been used by other researchers — but if you have used it, please let us know. We’d be thrilled to hear about your experience and results.

The questions presented at the beginning of the article are relatively focused on finding information because that’s where Inuit historically comes from and because most of the things users do involve finding information of some kind. (Please keep in mind that information doesn’t have to be text. On the contrary, most information is non-textual.) But those questions can be adapted as long as they still reflect the underlying aspects of usability, which are informativeness, understandability, confusion, distraction, readability, information density, and reachability.

Say, for instance, you want to evaluate a module from an e-learning course, e.g., in the form of an annotated video with a subsequent quiz. To accommodate the task at hand, Q1 could be rephrased to “I had all the information necessary to complete the module” and Q7 to “All the information necessary to complete the module was easily reachable.”

Conclusion

There are plenty of usability questionnaires out there, and we have added a new one to the pool — Inuit. Why? Because sometimes you find yourself in a situation where none of the existing questionnaires is the perfect fit. Inuit has been designed to be more diagnostic than existing usability instruments like SUS and for use with machine learning, all while asking fewer questions than other questionnaires. So, if any of this seems relevant to your use cases or context of work, why not give it a try?

From a scientific and statistical point of view, in a confirmatory factor analysis (CFA), Inuit has demonstrated that its questions do indeed measure usability. On top of that, it’s consistent and correlates well with SUS, based on data from a large-scale, longitudinal user study.

Note: If you want to dive deeper into the science behind Inuit, e.g., how exactly the items/questions were chosen, you can read the corresponding research paper “Inuit: The Interface Usability Instrument,” which was presented at the 2015 HCI International Conference. If you want to learn more about how Inuit can be used to train machine-learning models, read “Ensuring Web Interface Quality through Usability-Based Split Testing.” And finally, if you want to see how Inuit can be used as the basis for a tool that automatically proposes optimizations for an interface, you can refer to “S.O.S.: Does Your Search Engine Results Page (SERP) Need Help?” which was presented at the 2015 ACM Conference on Human Factors in Computing Systems.

]]>
hello@smashingmagazine.com (Maximilian Speicher & Johanna Jagow)
<![CDATA[The Fight For The Main Thread]]> https://smashingmagazine.com/2023/10/speedcurve-fight-main-thread/ https://smashingmagazine.com/2023/10/speedcurve-fight-main-thread/ Tue, 24 Oct 2023 18:00:00 GMT This article is a sponsored by SpeedCurve

Performance work is one of those things, as they say, that ought to happen in development. You know, have a plan for it and write code that’s mindful about adding extra weight to the page.

But not everything about performance happens directly at the code level, right? I’d say many — if not most — sites and apps rely on some number of third-party scripts where we might not have any influence over the code. Analytics is a good example. Writing a hand-spun analytics tracking dashboard isn’t what my clients really want to pay me for, so I’ll drop in the ol’ Google Analytics script and maybe never think of it again.

That’s one example and a common one at that. But what’s also common is managing multiple third-party scripts on a single page. One of my clients is big into user tracking, so in addition to a script for analytics, they’re also running third-party scripts for heatmaps, cart abandonments, and personalized recommendations — typical e-commerce stuff. All of that is dumped on any given page in one fell swoop courtesy of Google Tag Manager (GTM), which allows us to deploy and run scripts without having to go through the pain of re-deploying the entire site.

As a result, adding and executing scripts is a fairly trivial task. It is so effortless, in fact, that even non-developers on the team have contributed their own fair share of scripts, many of which I have no clue what they do. The boss wants something, and it’s going to happen one way or another, and GTM facilitates that work without friction between teams.

All of this adds up to what I often hear described as a “fight for the main thread.” That’s when I started hearing more performance-related jargon, like web workers, Core Web Vitals, deferring scripts, and preconnect, among others. And what I’ve since learned is that the techniques behind these terms make up an arsenal of tools for combating performance bottlenecks.

The real fight, it seems, is evaluating our needs as developers and stakeholders against a user’s needs, namely, the need for a fast and frictionless page load.

Fighting For The Main Thread

We’re talking about performance in the context of JavaScript, but there are lots of things that happen during a page load. The HTML is parsed. Same deal with CSS. Elements are rendered. JavaScript is loaded, and scripts are executed.

All of this happens on the main thread. I’ve heard the main thread described as a highway that gets cars from Point A to Point B; the more cars that are added to the road, the more crowded it gets and the more time it takes for cars to complete their trip. That’s accurate, I think, but we can take it a little further because this particular highway has just one lane, and it only goes in one direction. My mind thinks of San Francisco’s Lombard Street, a twisty one-way path of a tourist trap on a steep decline.

The main thread may not be that curvy, but you get the point: there’s only one way to go, and everything that enters it must go through it.

JavaScript operates in much the same way. It’s “single-threaded,” which is how we get the one-way street comparison. I like how Brian Barbour explains it:

“This means it has one call stack and one memory heap. As expected, it executes code in order and must finish executing a piece of code before moving on to the next. It's synchronous, but at times that can be harmful. For example, if a function takes a while to execute or has to wait on something, it freezes everything up in the meantime.”

— Brian Barbour

So, there we have it: a fight for the main thread. Each resource on a page is a contender vying for a spot on the thread and wants to run first. If one contender takes its sweet time doing its job, then the contenders behind it in line just have to wait.

Monitoring The Main Thread

If you’re like me, I immediately reach for DevTools and open the Lighthouse tab when I need to look into a site’s performance. It covers a lot of ground, like reporting stats about a page’s load time that include Time to First Byte (TTFB), First Contentful Paint (FCP), Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), and so on.

I love this stuff! But I also am scared to death of it. I mean, this is stuff for back-end engineers, right? A measly front-end designer like me can be blissfully ignorant of all this mumbo-jumbo.

Meh, untrue. Like accessibility, performance is everyone’s job because everyone’s work contributes to it. Even the choice to use a particular CSS framework influences performance.

Total Blocking Time

One thing I know would be more helpful than a set of Core Web Vitals scores from Lighthouse is knowing how long the main thread is blocked by long-running tasks between the First Contentful Paint (FCP) and the Time to Interactive (TTI), a metric known as the Total Blocking Time (TBT). You can see that Lighthouse does indeed provide that metric. Let’s look at it for a site that’s much “heavier” than Smashing Magazine.

There we go. The problem with the Lighthouse report, though, is that I have no idea what is causing that TBT. We can get a better view if we run the same test in another service, like SpeedCurve, which digs deeper into the metric. We can expand the metric to glean insights into what exactly is causing traffic on the main thread.

That’s a nice big view and is a good illustration of TBT’s impact on page speed. The user is forced to wait a whopping 4.1 seconds between the time the first significant piece of content loads and the time the page becomes interactive. That’s a lifetime in web seconds, particularly considering that this test is based on a desktop experience on a high-speed connection.

One of my favorite charts in SpeedCurve is this one showing the distribution of Core Web Vitals metrics during render. You can see the delta between contentful paints and interaction!

Spotting Long Tasks

What I really want to see is JavaScript that takes more than 50ms to run. These are called long tasks, and they put the most strain on the main thread. If I scroll down further into the report, all of the long tasks are highlighted in red.
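
You don’t need a paid tool to catch long tasks, either. Here’s a minimal sketch using the browser’s Long Tasks API to log every task over 50ms to the console (browser support varies, so treat it as a progressive enhancement):

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Each entry is a task that blocked the main thread for longer than 50ms.
    console.log(`Long task: ${Math.round(entry.duration)}ms`, entry);
  }
});

// buffered: true also reports long tasks that happened before the observer started.
observer.observe({ type: "longtask", buffered: true });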

Another way I can evaluate scripts is by opening up the Waterfall View. The default view is helpful to see where a particular event happens in the timeline.

But wait! This report can be expanded to see not only what is loaded at the various points in time but whether they are blocking the thread and by how much. Most important are the assets that come before the FCP.

First & Third Party Scripts

I can see right off the bat that Optimizely is serving a render-blocking script. SpeedCurve can go even deeper by distinguishing between first- and third-party scripts.

That way, I can see more detail about what’s happening on the Optimizely side of things.

Monitoring Blocking Scripts

With that in place, SpeedCurve actually lets me track all the resources from a specific third-party source in a custom graph that offers me many more data points to evaluate. For example, I can dive into scripts that come from Optimizely with a set of custom filters to compare them with overall requests and sizes.

This provides a nice way to compare the impact of different third-party scripts in terms of blocking behavior and long tasks, like how much time those long tasks take up.

Or perhaps which of these sources are actually render-blocking:

These are the kinds of tools that allow us to identify bottlenecks and make a case for optimizing them or removing them altogether. SpeedCurve allows me to monitor this over time, giving me better insight into the performance of those assets.

Monitoring Interaction to Next Paint

There’s going to be a new way to gain insights into main thread traffic when Interaction to Next Paint (INP) becomes a new Core Web Vitals metric in March 2024. It replaces the First Input Delay (FID) metric.

What’s so important about that? Well, FID has been used to measure load responsiveness, which is a fancy way of saying it looks at how fast the browser handles the first user interaction on the page. And by interaction, we mean some action the user takes that triggers an event, such as a click, mousedown, keydown, or pointerdown event. FID measures the time between the moment the user sparks an interaction and the moment the browser begins processing — or responding to — that input.

FID might easily be overlooked when trying to diagnose long tasks on the main thread because it looks at the amount of time a user spends waiting after interacting with the page rather than the time it takes to render the page itself. It can’t be replicated with lab data because it’s based on a real user interaction. That said, FID is correlated to TBT in that the higher the FID, the higher the TBT, and vice versa. So, TBT is often the go-to metric for identifying long tasks because it can be measured with lab data as well as real-user monitoring (RUM).

But FID is fraught with limitations, the most significant perhaps being that it’s only a measure of the first interaction. That’s where INP comes into play. Instead of measuring the first interaction and only the first interaction, it measures all interactions on a page. Jeremy Wagner has a more articulate explanation:

“The goal of INP is to ensure the time from when a user initiates an interaction until the next frame is painted is as short as possible for all or most interactions the user makes.”
— Jeremy Wagner

Some interactions are naturally going to take longer to respond than others. So, we might think of FID as merely a first impression of responsiveness, whereas INP is a more complete picture. And like FID, the INP score is closely correlated with TBT but even more so, as Annie Sullivan reports.

Thankfully, performance tools are already beginning to bake INP into their reports. SpeedCurve is indeed one of them, and its report shows how its RUM capabilities can be used to illustrate the correlation between INP and long tasks on the main thread. This correlation chart illustrates how INP gets worse as the total long tasks’ time increases.

What’s cool about this report is that it is always collecting data, providing a way to monitor INP and its relationship to long tasks over time.
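
If you want to start collecting INP from your own real users, the open-source web-vitals library already exposes it. Here’s a minimal sketch, assuming the library is installed from npm and your code runs as an ES module:

import { onINP } from "web-vitals";

onINP((metric) => {
  // metric.value is the interaction latency in milliseconds;
  // metric.rating is "good", "needs-improvement", or "poor".
  console.log("INP:", metric.value, metric.rating);
});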

Not All Scripts Are Created Equal

There is such a thing as a “good” script. It’s not like I’m some anti-JavaScript bloke intent on getting scripts off the web. But what constitutes a “good” one is nuanced.

Who’s It Serving?

Some scripts benefit the organization, and others benefit the user (or both). The challenge is balancing business needs with user needs.

I think web fonts are a good example that serves both needs. A font is a branding consideration as well as a design asset that can enhance the legibility of a site’s content. Something like that might make loading a font script or file worth its cost to page performance. That’s a tough one. So, rather than fully eliminating a font, maybe it can be optimized instead, perhaps by self-hosting the files rather than connecting to a third-party domain or only loading a subset of characters.

Analytics is another difficult choice. I removed analytics from my personal site long ago because I rarely, if ever, looked at them. And even if I did, the stats were more of an ego booster than insightful details that helped me improve the user experience. It’s an easy decision for me, but not so easy for a site that lives and dies by reports that are used to identify and scope improvements.

If the script is really being used to benefit the user at the end of the day, then yeah, it’s worth keeping around.

When Is It Served?

A script may very well serve a valid purpose and benefit both the organization and the end user. But does it need to load first before anything else? That’s the sort of question to ask when a script might be useful, but can certainly jump out of line to let others run first.

I think of chat widgets for customer support. Yes, having a persistent and convenient way for customers to get in touch with support is going to be important, particularly for e-commerce and SaaS-based services. But does it need to be available immediately? Probably not. There’s a stronger case for getting the site to a state the user can interact with before putting a third-party widget front and center. There’s little point in rendering the widget if the rest of the site is inaccessible anyway. It is better to get things moving first by prioritizing some scripts ahead of others.

Where Is It Served From?

Just because a script comes from a third party doesn’t mean it has to be hosted by a third party. The web fonts example from earlier applies. Can the font files be self-hosted instead rather than needing to establish another outside connection? It’s worth asking. There are self-hosted alternatives to Google Analytics, after all. And even GTM can be self-hosted! That’s why grouping first and third-party scripts in SpeedCurve’s reporting is so useful: spot what is being served and where it is coming from and identify possible opportunities.

What Is It Serving?

Loading one script can bring unexpected visitors along for the ride. I think the classic case is a third-party script that loads its own assets, like a stylesheet. Even if you think you’re only loading one stylesheet — your own — it’s very possible that a script loads additional external stylesheets, all of which need to be downloaded and rendered.

Getting JavaScript Off The Main Thread

That’s the goal! We want fewer cars on the road to alleviate traffic on the main thread. There are a bunch of technical ways to go about it. I’m not here to write up a definitive guide of technical approaches for optimizing the main thread, but there is a wealth of material on the topic.

I’ll break down several different approaches and fill them in with resources that do a great job explaining them in full.

Use Web Workers

A web worker, at its most basic, allows us to establish separate threads that handle tasks off the main thread. Web workers run parallel to the main thread. There are limitations to them, of course, most notably not having direct access to the DOM and being unable to share variables with other threads. But using them can be an effective way to re-route traffic from the main thread to other streets, so to speak.
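
As a quick illustration, here’s a minimal sketch of offloading a computation to a worker. The file name worker.js and the payload are placeholders for this example:

// main.js
const worker = new Worker("worker.js");

worker.postMessage({ numbers: [1, 2, 3, 4, 5] });

worker.onmessage = (event) => {
  // The heavy lifting happened off the main thread.
  console.log("Result:", event.data);
};

// worker.js
self.onmessage = (event) => {
  const sum = event.data.numbers.reduce((total, n) => total + n, 0);
  self.postMessage(sum);
};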

Split JavaScript Bundles Into Individual Pieces

The basic idea is to avoid bundling JavaScript as a monolithic concatenated file in favor of “code splitting” or splitting the bundle up into separate, smaller payloads to send only the code that’s needed. This reduces the amount of JavaScript that needs to be parsed, which improves traffic along the main thread.
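
Dynamic import() is one straightforward way to do this, and bundlers like webpack also treat it as a code-splitting point. A minimal sketch, where ./chart.js and renderChart are hypothetical names:

const button = document.querySelector("#show-chart");

button.addEventListener("click", async () => {
  // The module is fetched, parsed, and executed only on demand.
  const { renderChart } = await import("./chart.js");
  renderChart();
});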

Async or Defer Scripts

Both are ways to load JavaScript without blocking the DOM. But they are different! Adding the async attribute to a <script> tag will load the script asynchronously, executing it as soon as it’s downloaded. That’s different from the defer attribute, which is also asynchronous but waits until the DOM is fully loaded before it executes.
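
In markup, the difference is a single attribute, but the loading behavior changes significantly (the script names here are placeholders):

<!-- Downloads in parallel with HTML parsing, executes as soon as it arrives (order not guaranteed) -->
<script async src="analytics.js"></script>

<!-- Downloads in parallel, but waits to execute until the document has been parsed (order preserved) -->
<script defer src="app.js"></script>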

Preconnect Network Connections

I guess I could have filed this with async and defer. That’s because preconnect is a value on the rel attribute that’s used on a <link> tag. It gives the browser a hint that you plan to connect to another domain. It establishes the connection as soon as possible prior to actually downloading the resource. The connection is done in advance, allowing the full script to download later.
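
The hint itself is a one-liner in the document’s <head> (the domain here is a placeholder):

<!-- Perform the DNS lookup, TCP handshake, and TLS negotiation ahead of time -->
<link rel="preconnect" href="https://third-party.example.com" />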

While it sounds excellent — and it is — pre-connecting comes with an unfortunate downside in that it exposes a user’s IP address to third-party resources used on the page, which can run afoul of GDPR. There was a little uproar over that when it came to light that loading Google Fonts from Google’s servers is prone to the same issue.

Non-Technical Approaches

I often think of a Yiddish proverb I first saw in Malcolm Gladwell’s Outliers many years ago:

To a worm in horseradish, the whole world is horseradish.

It’s a more pleasing and articulate version of the saying that goes, “To a carpenter, every problem looks like a nail.” So, too, it is for developers working on performance. To us, every problem is code that needs a technical solution. But there are indeed ways to reduce the amount of work happening on the main thread without having to touch code directly.

We discussed earlier that performance is not only a developer’s job; it’s everyone’s responsibility. So, think of these as strategies that encourage a “culture” of good performance in an organization.

Nuke Scripts That Lack Purpose

As I said at the start of this article, there are some scripts on the projects I work on that I have no idea what they do. It’s not because I don’t care. It’s because GTM makes it ridiculously easy to inject scripts on a page, and more than one person can access it across multiple teams.

So, maybe compile a list of all the third-party and render-blocking scripts and figure out who owns them. Is it Dave in DevOps? Marcia in Marketing? Is it someone else entirely? You gotta make friends with them. That way, there can be an honest evaluation of which scripts are actually helping the business and which are worth their cost to users.

Bend Google Tag Manager To Your Will

Or any tag manager, for that matter. Tag managers have a pretty bad reputation for adding bloat to a page. It’s true; they can definitely make the page size balloon as more and more scripts are injected.

But that reputation is not totally warranted because, like most tools, you have to use them responsibly. Sure, the beauty of something like GTM is how easy it makes adding scripts to a page. That’s the “Tag” in Google Tag Manager. But the real beauty is that convenience, plus the features it provides to manage the scripts. You know, the “Manage” in Google Tag Manager. It’s spelled out right on the tin!

Wrapping Up

Phew! Performance is not exactly a straightforward science. There are objective ways to measure performance, of course, but if I’ve learned anything about it, it’s that subjectivity is a big part of the process. Different scripts are of different sizes and consist of different resources serving different needs that have different priorities for different organizations and their users.

Having access to a free reporting tool like Lighthouse in DevTools is a great start for diagnosing performance issues by identifying bottlenecks on the main thread. Even better are paid tools like SpeedCurve to dig deeper into the data for more targeted insights and to produce visual reports to help make a case for performance improvements for your team and other stakeholders.

While I wish there were some sort of silver bullet to guarantee good performance, I’ll gladly take these and similar tools as a starting point. Most important, though, is having a performance game plan that is served by the tools. And Vitaly’s front-end performance checklist is an excellent place to start.

]]>
hello@smashingmagazine.com (Geoff Graham)
<![CDATA[What Removing Object Properties Tells Us About JavaScript]]> https://smashingmagazine.com/2023/10/removing-object-properties-javascript/ https://smashingmagazine.com/2023/10/removing-object-properties-javascript/ Mon, 23 Oct 2023 13:00:00 GMT A group of contestants are asked to complete the following task:

Make object1 similar to object2.
let object1 = {
  a: "hello",
  b: "world",
  c: "!!!",
};

let object2 = {
  a: "hello",
  b: "world",
};

Seems easy, right? Simply delete the c property to match object2. Surprisingly, each person described a different solution:

  • Contestant A: “I set c to undefined.”
  • Contestant B: “I used the delete operator.”
  • Contestant C: “I deleted the property through a Proxy object.”
  • Contestant D: “I avoided mutation by using object destructuring.”
  • Contestant E: “I used JSON.stringify and JSON.parse.”
  • Contestant F: “We rely on Lodash at my company.”

An awful lot of answers were given, and they all seem to be valid options. So, who is “right”? Let’s dissect each approach.

Contestant A: “I Set c To undefined.”

In JavaScript, accessing a non-existing property returns undefined.

const movie = {
  name: "Up",
};

console.log(movie.premiere); // undefined

It’s easy to think that setting a property to undefined removes it from the object. But if we try to do that, we will observe a small but important detail:

const movie = {
  name: "Up",
  premiere: 2009,
};

movie.premiere = undefined;

console.log(movie);

Here is the output we get back:

{name: 'Up', premiere: undefined}

As you can see, premiere still exists inside the object even when it is undefined. This approach doesn’t actually delete the property but rather changes its value. We can confirm that using the hasOwnProperty() method:

const propertyExists = movie.hasOwnProperty("premiere");

console.log(propertyExists); // true

But then why, in our first example, does accessing movie.premiere return undefined if the property doesn’t exist in the object? Shouldn’t it throw an error like when accessing a non-existing variable?

console.log(iDontExist);

// Uncaught ReferenceError: iDontExist is not defined

The answer lies in how ReferenceError behaves and what a reference is in the first place.

A reference is a resolved name binding that indicates where a value is stored. It consists of three components: a base value, the referenced name, and a strict reference flag.

For a user.name reference, the base value is the object, user, while the referenced name is the string, name, and the strict reference flag is false if the code isn't in strict mode.

Variables behave differently. They don’t have a parent object, so their base value is an environment record, i.e., a unique base value assigned each time the code is executed.

If we try to access something that doesn’t have a base value, JavaScript will throw a ReferenceError. However, if a base value is found, but the referenced name doesn’t point to an existing value, JavaScript will simply assign the value undefined.

“The Undefined type has exactly one value, called undefined. Any variable that has not been assigned a value has the value undefined.”

ECMAScript Specification

We could spend an entire article just addressing undefined shenanigans!

Contestant B: “I Used The delete Operator.”

The delete operator’s sole purpose is to remove a property from an object, returning true if the element is successfully removed.

const dog = {
  breed: "bulldog",
  fur: "white",
};

delete dog.fur;

console.log(dog); // {breed: 'bulldog'}

Some caveats come with the delete operator that we have to take into consideration before using it. First, the delete operator can be used to remove an element from an array. However, it leaves an empty slot inside the array, which may cause unexpected behavior since properties like length aren’t updated and still count the open slot.

const movies = ["Interstellar", "Top Gun", "The Martian", "Speed"];

delete movies[2];

console.log(movies); // ['Interstellar', 'Top Gun', empty, 'Speed']

console.log(movies.length); // 4

Secondly, let’s imagine the following nested object:

const user = {
  name: "John",
  birthday: {day: 14, month: 2},
};

Trying to remove the birthday property using the delete operator will work just fine, but there is a common misconception that doing this frees up the memory allocated for the object.

In the example above, birthday is a property holding a nested object. Objects in JavaScript behave differently from primitive values (e.g., numbers, strings, and booleans) as far as how they are stored in memory. They are stored and copied “by reference,” while primitive values are copied independently as a whole value.

Take, for example, a primitive value such as a string:

let movie = "Home Alone";
let bestSeller = movie;

In this case, each variable has an independent space in memory. We can see this behavior if we try to reassign one of them:

movie = "Terminator";

console.log(movie); // "Terminator"

console.log(bestSeller); // "Home Alone"

In this case, reassigning movie doesn't affect bestSeller since they are in two different spaces in memory. Properties or variables holding objects (e.g., regular objects, arrays, and functions) are references pointing to a single space in memory. If we try to copy an object, we are merely duplicating its reference.

let movie = {title: "Home Alone"};
let bestSeller = movie;

bestSeller.title = "Terminator";

console.log(movie); // {title: "Terminator"}

console.log(bestSeller); // {title: "Terminator"}

As you can see, both variables now point to the same object, so reassigning a property through bestSeller also changes what we get through movie. Under the hood, JavaScript looks at the actual object in memory and performs the change, and both references point to the changed object.

Knowing how objects behave “by reference,” we can now understand how using the delete operator doesn’t free space in memory.

The process in which programming languages free memory is called garbage collection. In JavaScript, memory is freed for an object when there are no more references and it becomes unreachable. So, using the delete operator may make the property’s space eligible for collection, but there may be more references preventing it from being deleted from memory.

While we’re on the topic, it’s worth noting that there is a bit of a debate around the delete operator’s impact on performance. You can follow the rabbit trail from the link, but I’ll go ahead and spoil the ending for you: the difference in performance is so negligible that it wouldn’t pose a problem in the vast majority of use cases. Personally, I consider the operator’s idiomatic and straightforward approach a win over a minuscule hit to performance.

That said, an argument can be made against using delete since it mutates an object. In general, it’s a good practice to avoid mutations since they may lead to unexpected behavior where a variable doesn’t hold the value we assume it has.

Contestant C: “I Deleted The Property Through A Proxy Object.”

This contestant was definitely a show-off and used a proxy for their answer. A proxy is a way to insert some middle logic between an object’s common operations, like getting, setting, defining, and, yes, deleting properties. It works through the Proxy constructor that takes two parameters:

  • target: The object from where we want to create a proxy.
  • handler: An object containing the middle logic for the operations.

Inside the handler, we define methods for the different operations, called traps, because they intercept the original operation and perform a custom change. The constructor will return a Proxy object — an object identical to the target — but with the added middle logic.

const cat = {
  breed: "siamese",
  age: 3,
};

const handler = {
  get(target, property) {
    return `cat's ${property} is ${target[property]}`;
  },
};

const catProxy = new Proxy(cat, handler);

console.log(catProxy.breed); // cat's breed is siamese

console.log(catProxy.age); // cat's age is 3

Here, the handler modifies the getting operation to return a custom value.

Say we want to log the property we are deleting to the console each time we use the delete operator. We can add this custom logic through a proxy using the deleteProperty trap.

const product = {
  name: "vase",
  price: 10,
};

const handler = {
  deleteProperty(target, property) {
    console.log(`Deleting property: ${property}`);
  },
};

const productProxy = new Proxy(product, handler);

delete productProxy.name; // Deleting property: name

The name of the property is logged in the console but throws an error in the process:

Uncaught TypeError: 'deleteProperty' on proxy: trap returned falsish for property 'name'

The error is thrown because the handler didn’t have a return value. That means it defaults to undefined. In strict mode, if the delete operator returns false, it will throw an error, and undefined, being a falsy value, triggers this behavior.

If we try to return true to avoid the error, we will encounter a different sort of issue:

// ...

const handler = {
  deleteProperty(target, property) {
    console.log(`Deleting property: ${property}`);

    return true;
  },
};

const productProxy = new Proxy(product, handler);

delete productProxy.name; // Deleting property: name

console.log(productProxy); // {name: 'vase', price: 10}

The property isn’t deleted!

We replaced the delete operator’s default behavior with this code, so it doesn’t remember it has to “delete” the property.

This is where Reflect comes into play.

Reflect is a global object with a collection of all the internal methods of an object. Its methods can be used as normal operations anywhere, but it’s meant to be used inside a proxy.

For example, we can solve the issue in our code by returning Reflect.deleteProperty() (i.e., the Reflect version of the delete operator) inside of the handler.

const product = {
  name: "vase",
  price: 10,
};

const handler = {
  deleteProperty(target, property) {
    console.log(`Deleting property: ${property}`);

    return Reflect.deleteProperty(target, property);
  },
};

const productProxy = new Proxy(product, handler);

delete productProxy.name; // Deleting property: name

console.log(product); // {price: 10}

It is worth calling out that certain objects, like Math, Date, and JSON, have properties that cannot be deleted using the delete operator or any other method. These are “non-configurable” object properties, meaning that they cannot be reassigned or deleted. If we try to use the delete operator on a non-configurable property, it will fail silently and return false or throw an error if we are running our code in strict mode.

"use strict";

delete Math.PI;

Output:

Uncaught TypeError: Cannot delete property 'PI' of #<Object>

If we want to avoid errors with the delete operator and non-configurable properties, we can use the Reflect.deleteProperty() method since it doesn’t throw an error when trying to delete a non-configurable property — even in strict mode — because it fails silently.
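
A quick sketch of that difference in behavior:

"use strict";

// The delete operator throws on a non-configurable property in strict mode...
// delete Math.PI; // Uncaught TypeError

// ...while Reflect.deleteProperty() simply reports failure with its return value.
console.log(Reflect.deleteProperty(Math, "PI")); // false

console.log(Math.PI); // 3.141592653589793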

I assume, however, that you would prefer knowing when you are trying to delete a non-configurable property rather than having the failure pass silently.

Contestant D: “I Avoided Mutation By Using Object Destructuring.”

Object destructuring is an assignment syntax that extracts an object’s properties into individual variables. It uses curly braces notation ({}) on the left side of an assignment to indicate which of the properties to get.

const movie = {
  title: "Avatar",
  genre: "science fiction",
};

const {title, genre} = movie;

console.log(title); // Avatar

console.log(genre); // science fiction

It also works with arrays using square brackets ([]):

const animals = ["dog", "cat", "snake", "elephant"];

const [a, b] = animals;

console.log(a); // dog

console.log(b); // cat

The spread syntax (...) is sort of like the opposite operation. On the left side of a destructuring assignment (where it’s technically called rest syntax), it collects all the remaining properties into a new object or array.

We can use object destructuring to unpack the values of our object and the spread syntax to keep only the ones we want:

const car = {
  type: "truck",
  color: "black",
  doors: 4
};

const {color, ...newCar} = car;

console.log(newCar); // {type: 'truck', doors: 4}

This way, we avoid having to mutate our objects and the potential side effects that come with it!

Here’s an edge case with this approach: deleting a property only when it’s undefined. Thanks to the flexibility of object destructuring, we can delete properties when they are undefined (or falsy, to be exact).

Imagine you run an online store with a vast database of products. You have a function to find them. Of course, it will need some parameters, perhaps the product name and category.

const find = (product, category) => {
  const options = {
    limit: 10,
    product,
    category,
  };

  console.log(options);

  // Find in database...
};

In this example, the product name has to be provided by the user to make the query, but the category is optional. So, we could call the function like this:

find("bedsheets");

And since a category is not specified, it returns as undefined, resulting in the following output:

{limit: 10, product: 'bedsheets', category: undefined}

In this case, we shouldn’t use default parameters because we aren’t looking for one specific category.

Notice how the database could incorrectly assume that we are querying products in a category called undefined! That would lead to an empty result, which is an unintended side effect. Even though many databases will filter out the undefined property for us, it would be better to sanitize the options before making the query. A cool way to dynamically remove an undefined property is with the spread syntax combined with the AND operator (&&).

Instead of writing options like this:

const options = {
  limit: 10,
  product,
  category,
};

...we can do this instead:

const options = {
  limit: 10,
  product,
  ...(category && {category}),
};

It may seem like a complex expression, but after understanding each part, it becomes a straightforward one-liner. What we are doing is taking advantage of the && operator.

The AND operator is mostly used in conditional statements to say,

If A and B are true, then do this.

But at its core, it evaluates two expressions from left to right, returning the expression on the left if it is falsy and the expression on the right otherwise. So, in our prior example, the AND operator has two cases:

  1. category is undefined (or falsy);
  2. category is defined.

In the first case, where it is falsy, the operator returns the expression on the left, category. If we spread category into the rest of the object, it evaluates this way:

const options = {
  limit: 10,

  product,

  ...category,
};

And if we spread any falsy value inside an object, it spreads into nothing:

const options = {
  limit: 10,
  product,
};

In the second case, since category is truthy, the operator returns the expression on the right, {category}. When plugged into the object, it evaluates this way:

const options = {
  limit: 10,
  product,
  ...{category},
};

And since category is defined, it spreads into a normal property:

const options = {
  limit: 10,
  product,
  category,
};

Put it all together, and we get the following betterFind() function:

const betterFind = (product, category) => {
  const options = {
    limit: 10,
    product,
    ...(category && {category}),
  };

  console.log(options);

  // Find in a database...
};

betterFind("sofas");

And if we don’t specify any category, it simply does not appear in the final options object.

{limit: 10, product: 'sofas'}

Contestant E: “I Used JSON.stringify And JSON.parse.”

Surprisingly to me, there is a way to remove a property by reassigning it to undefined. The following code does exactly that:

let monitor = {
  size: 24,
  screen: "OLED",
};

monitor.screen = undefined;

monitor = JSON.parse(JSON.stringify(monitor));

console.log(monitor); // {size: 24}

I sort of lied to you since we are employing some JSON shenanigans to pull off this trick, but we can learn something useful and interesting from them.

Even though JSON takes direct inspiration from JavaScript, it differs in that it has a much stricter syntax. It doesn’t allow functions or undefined values, so using JSON.stringify() will omit all non-valid values during conversion, resulting in JSON text without the undefined properties. From there, we can parse the JSON text back into a JavaScript object using the JSON.parse() method.

It’s important to know the limitations of this approach. For example, JSON.stringify() skips functions and throws an error if either a circular reference (i.e., a property is referencing its parent object) or a BigInt value is found.
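
For example, here’s the circular reference case in action:

const node = {};
node.self = node; // the object references itself

JSON.stringify(node);
// Uncaught TypeError: Converting circular structure to JSON
// (the exact message varies between JavaScript engines)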

Contestant F: “We Rely On Lodash At My Company.”

It’s worth noting that utility libraries such as Lodash.js, Underscore.js, and Ramda also provide methods like omit() and pick() to remove or keep an object’s properties without mutating it. We won’t go through different examples for each library since their documentation already does an excellent job of that.
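
Just to give a flavor of it, here’s a minimal sketch of the Lodash approach, assuming Lodash is available as the conventional _ global:

const car = {
  type: "truck",
  color: "black",
  doors: 4,
};

// omit() returns a new object without the given paths; car itself is not mutated.
const newCar = _.omit(car, ["color"]);

console.log(newCar); // {type: 'truck', doors: 4}

console.log(car); // {type: 'truck', color: 'black', doors: 4}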

Conclusion

Back to our initial scenario, which contestant is right?

The answer: All of them! Well, except for the first contestant. Setting a property to undefined just isn’t an approach we want to consider for removing a property from an object, given all of the other ways we have to go about it.

Like most things in development, the most “correct” approach depends on the situation. But what’s interesting is that behind each approach is a lesson about the very nature of JavaScript. Understanding all the ways to delete a property in JavaScript can teach us fundamental aspects of programming and JavaScript, such as memory management, garbage collection, proxies, JSON, and object mutation. That’s quite a bit of learning for something seemingly so boring and trivial!

]]>
hello@smashingmagazine.com (Juan Diego Rodríguez)
<![CDATA[A Roundup Of WCAG 2.2 Explainers]]> https://smashingmagazine.com/2023/10/roundup-wcag-explainers/ https://smashingmagazine.com/2023/10/roundup-wcag-explainers/ Fri, 20 Oct 2023 13:00:00 GMT WCAG 2.2 is officially the latest version of the Web Content Accessibility Guidelines now that it has become a “W3C Recommended” web standard as of October 5.

The changes between WCAG 2.1 and 2.2 are nicely summed up in “What’s New in WCAG 2.2”:

“WCAG 2.2 provides 9 additional success criteria since WCAG 2.1. [...] The 2.0 and 2.1 success criteria are essentially the same in 2.2, with one exception: 4.1.1 Parsing is obsolete and removed from WCAG 2.2.”

This article is not a deep look at the changes, what they mean, and how to conform to them. Plenty of other people have done a wonderful job of that already. So, rather than add to the pile, let’s round up what has already been written and learn from those who keep a close pulse on the WCAG beat.

There are countless articles and posts about WCAG 2.2 written ahead of the formal W3C recommendation. The following links were selected because they were either published or updated after the announcement and reflect the most current information at the time of this writing. It’s also worth mentioning that we’re providing these links purely for reference — by no means are they sponsored, nor do they endorse a particular person, company, or product.

The best place for information on WCAG standards will always be the guidelines themselves, but we hope you enjoy what others are saying about them as well.

Hidde de Vries: What’s New In WCAG 2.2?

Hidde is a former W3C staffer, and he originally published this WCAG 2.2 overview last year when a draft of the guidelines was released, updating his post immediately when the guidelines became a recommendation.

Patrick Lauke: What’s New In WCAG 2.2

Patrick is a current WCAG member and contributor, also serving as Principal Accessibility Specialist at TetraLogical, which itself is also a W3C member.

This overview goes deeper than most, reporting not only what is new in WCAG 2.2 but how to conform to those standards, including specific examples with excellent visuals.

James Edwards: New Success Criteria In WCAG 2.2

James is a seasoned accessibility consultant with TPGi, a provider of end-to-end accessibility services and products.

Like Patrick, James gets into thorough and detailed information about WCAG 2.2 and how to meet the updated standards. Watch for little asides strewn throughout the post that provide even further context on why the changes were needed and how they were developed.

GOV.UK: Understanding WCAG 2.2

It’s always interesting to see how large organizations approach standards, and governments are no exception because they have a mandate to meet accessibility requirements. GOV.UK published an addendum on WCAG 2.2 updates to its Service Manual.

Notice how the emphasis is on the impact the new guidelines have on specific impairments, as well as ample examples of what it looks like to meet the standards. Equally impressive is the documented, measured approach GOV.UK takes, including a goal to be fully compliant by October 2024 while maintaining WCAG 2.1 AA compliance in the meantime.

Deque Systems: Deque Systems Welcomes and Announces Support for WCAG 2.2

Despite being more of a press release, this brief overview has a nice clean table that outlines the new standards and how they align with those who stand to benefit most from them.

Kate Kalcevich: WCAG 2.2: What Changes for Websites and How Does It Impact Users?

Kate really digs into the benefits that users get with WCAG 2.2 compliance. Photos of Kate’s colleague, Samuel Proulx, don’t provide new context but are a nice touch for remembering that the updated guidelines are designed to help real people, a point that is emphasized in the conclusion:

“[W]hen thinking about accessibility beyond compliance, it becomes clear that the latest W3C guidelines are just variations on a theme. The theme is removing barriers and making access possible for everyone.”
— Kate Kalcevich

Level Access: WCAG 2.2 AA Summary and Checklist for Website Owners

Finally, we’ve reached the first checklist! That may be in name only, as this is less of a checklist of tasks than it is a high-level overview of the latest changes. There is, however, a link to download “the must-have WCAG checklist,” but you will need to hand over your name and email address in exchange.

Chris Pycroft: WCAG 2.2 Is Here

While this is more of an announcement than a guide, there is plenty of useful information in there. The reason I’m linking it up is the “WCAG 2.2 Map” PDF that Chris includes in it. It’d be great if there were a web version of it, but I’ll take it either way! The map neatly outlines the success criteria by branching them off the four core WCAG principles.

Shira Blank and Joshua Stein: After More Than a Year of Delays, It Is Time to Officially Welcome WCAG 2.2

This is a nice overview. Nothing more, nothing less. It does include a note that WCAG 2.2 is slated to be the last WCAG 2 update between now and WCAG 3, which apparently is codenamed “Silver”? Nice.

Nathan Schmidt: Demystifying WCAG 2.2

True to its title, this overview nicely explains WCAG 2.2 updates devoid of complex technical jargon. What makes it worth including in this collection, however, are the visuals that help drive home the points.

Craig Abbott: WCAG 2.2 And What It Means For You

Craig’s write-up is a lot like the others in that it’s a high-level overview of changes paired with advice for complying with them. But Craig has a knack for discussing the changes in a way that’s super approachable and even reads like a friendly conversation. There are personal anecdotes peppered throughout the post, including Craig’s own views of the standards themselves.

“I personally feel like the new criteria for Focus Appearance could have been braver and removed some of the ambiguity around what is already often an accessibility issue.”
— Craig Abbott

Dennis Lembrée: WCAG 2.2 Checklist With Filter And Links

Dennis published a quick post on his Web Axe blog reporting on WCAG 2.2, but it’s this CodePen demo he put together that’s the real gem.

See the Pen WCAG 2.2 Checklist with Filter and Links [forked] by Web Overhauls.

It’s a legit checklist of WCAG requirements that you can filter by release, including the new WCAG 2.2 changes and the chapter of the specifications each one aligns with.

Jason Taylor: WCAG 2.2 Is Here! What It Means For Your Business

Yet another explainer, this time from Jason Taylor at UsableNet. You’ll find a lot of cross-over between this and the others in this roundup, but it’s always good to read about the changes with someone else’s words and perspectives.

Wrapping Up

There are many, many WCAG 2.2 explainers floating around — many more than what’s included in this little roundup. The number of changes introduced in the updated guidelines is surprisingly small, considering WCAG 2.1 was adopted in 2018, but that doesn’t make them any less impactful. So, yes, you’re going to see plenty of overlapping information between explainers. The nuances between them, though, are what makes them valuable, and each one has something worth taking with you.

And we’re likely to see even more explainers pop up! If you know of one that really should be included in this roundup, please do link it up in the comments to share with the rest of us.

]]>
hello@smashingmagazine.com (Geoff Graham)