Videos are a popular form of content on websites because of how user-friendly and easy to digest they can be. They can also be really useful assets if you’re trying to get your webpages to rank better.
In this post, we’re going to review eight tips that will help you optimize your on-page videos for search results.
You can find videos in all kinds of places on websites. For example:
Not every video is going to be rank-worthy. For instance, a slow-motion video hero doesn’t need to be optimized for search. So when implementing the tips below, focus on the videos that users would actually seek out in Google video search results.
There are a couple of things to consider here.
First is what video files do to page speeds. The longer and bigger the video file, the more resources it’s going to consume on your server. And the more videos you include on the page, the worse it’s going to get.
If you’d prefer to host your own video assets, then give it a try. But be sure to run the webpage through PageSpeed Insights after you publish it.
If the page takes more than three seconds to load, consider converting your video file into WebM. You can use an online converter like CloudConvert to do this.
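As a sketch, that conversion can also be done locally with ffmpeg, assuming it's installed — the input filename below is a placeholder:

```shell
# Convert an MP4 to WebM using the VP9 video codec and Opus audio codec.
# -crf controls quality (lower = better quality, bigger file); -b:v 0 tells
# the encoder to target quality rather than a fixed bitrate.
ffmpeg -i hero-video.mp4 -c:v libvpx-vp9 -crf 30 -b:v 0 -c:a libopus hero-video.webm
```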
If the self-hosted video option just isn’t working out, another option is to upload your file to YouTube or Vimeo and then embed it into the page. This will help keep the page loading quickly.
Another reason you might want to go this route—with YouTube specifically—is for the link juice.
Similar to how backlinks work, an embedded YouTube video would allow your page to borrow the credibility and authority (i.e., the link juice) from the source. When I refer to “source” here, I’m not talking about YouTube. I mean the video itself.
If you have a video that’s already getting a ton of views and engagement on YouTube, then it could be helpful to link to and directly embed it on your site. There’s no confirmation from Google that this will directly impact your webpage’s ranking, but because the technique works much like backlinking, it may offer similar benefits.
I’m not going to go into depth on this point since there’s an entire post devoted to video accessibility that covers these four tips:
Accessibility has become an important factor in how well a website ranks. Run any site through PageSpeed Insights and you’ll see what I mean: you’ll receive an accessibility score along with tips for improving it.
So ensuring that your videos are accessible is critical.
Google Keyword Planner is a useful tool for finding the right keywords for webpages. However, I wouldn’t rely solely on it when it comes to video content.
What I’d suggest instead is to do a search in Google Videos for the keyword you want to target. You’ll get a good idea of what kinds of search terms and metadata to use in order to rank among the top video results.
For example, let’s say I’m writing a listicle about “top halloween party ideas.” Google will autosuggest popular and relevant keywords as I type the search term out:
Review the autosuggested results along with the wording used in the top page results. I’d also recommend scrolling to the bottom of the page and checking out Google’s “Related searches” section. On this page, there are three search terms listed:
All of this information from Google will give you a good idea of which focus keyword to use. It’ll also help you come up with a meta title and description that best aligns with the users’ intent and draws them in.
Google bots aren’t able to watch your video. And unless you’ve uploaded it to YouTube and it’s already performing well there, Google doesn’t have much to draw on in terms of the content within it.
So you need to feed search engines data they can understand. Some of the surrounding on-page content will help, as will your metadata. However, schema markup is a must.
Schema markup does a couple of things. First, it helps Google understand what kind of content is on the page. It also gives important details about the content that will help Google users make up their minds about it. What these details are depends on the type of content you’ve created.
For example, I did a search for “anna vocino videos” and found these two results from her website:
Typically, search results display the following metadata:
Because these pages are set up as video pages, they appear as rich snippets with additional information visible for:
This is possible because of the structured data added to the backend of the page. There’s a lot more information that could be added here depending on the type of video you have. For instance, some recipe sites allow users to rate the recipe. In that case, a star rating could appear below the search listing.
If you’re curious to see the different possibilities for video markup, check out the VideoObject page on Schema.org. The more you make use of structured data, the greater the chances your video page will appear as a rich snippet in main search results (in addition to the Video results page).
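To make this concrete, here’s a minimal sketch of VideoObject structured data. Every value below is a placeholder — on a real page, the serialized JSON would go inside an application/ld+json script tag in the page’s HTML:

```javascript
// A minimal VideoObject — only common properties are shown here;
// Schema.org's VideoObject page lists many more.
// The finished JSON belongs in a <script type="application/ld+json"> tag.
const videoStructuredData = {
  "@context": "https://schema.org",
  "@type": "VideoObject",
  name: "How to Make Dog Kibble at Home",
  description: "A step-by-step walkthrough of a homemade kibble recipe.",
  thumbnailUrl: "https://example.com/images/kibble-thumbnail.jpg",
  uploadDate: "2024-01-15",
  duration: "PT4M30S", // ISO 8601 duration: 4 minutes, 30 seconds
  contentUrl: "https://example.com/videos/kibble.webm",
};

// Serialize it for embedding in the page markup.
const jsonLd = JSON.stringify(videoStructuredData, null, 2);
console.log(jsonLd);
```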
One reason why one webpage ranks better than another is the way its listing appears.
Many times, users don’t scroll past the first batch of search results. Some of them don’t even look past the first few organic listings. They size up the most popular and relevant results within a matter of seconds and then choose which one or ones to visit.
So in addition to nailing your search metadata, the video thumbnail graphic you use really needs to grab users’ attention and keep them from considering the other top-ranked videos.
Let me demonstrate why this matters. I just went to Google and did a search for “how to make dog kibble at home.” Here’s what I found at the top of the Video results page:
Although Allrecipes is a website I’ve used before and trust, that video thumbnail doesn’t look as appetizing as the rest. The first and fourth ones are great because they have custom, branded titles. This gives them a professional polish. The third one also stands out because of how vibrant and well-positioned the food is in the photo.
When it comes to creating the thumbnail, it might be easier to just use a screenshot from within the video. However, a custom, polished thumbnail will gain more attention in video search results.
I’d personally start with #3, then try #1 next (since it’s on a website vs. YouTube), then #2 (since I know the brand), and then #4. Get to know your target users and their habits when it comes to choosing content and brands to spend their time on. This will help you craft the perfect thumbnail as well as search metadata to pull them in.
The way you mark up your pages containing video can certainly help Google crawl and rank them accordingly. If you want to ensure that Google doesn’t miss any of these vital assets, create a separate XML sitemap for your video files.
Note: This option is only available if your videos have been uploaded to your server and have their own URLs. Otherwise, you’ll have to leave it up to Google to detect embedded videos from YouTube, Vimeo or other sources.
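For reference, a minimal video sitemap entry might look like the sketch below — all URLs and text are placeholders, and Google’s video sitemap namespace supports many more tags:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <loc>https://example.com/dog-kibble-recipe</loc>
    <video:video>
      <video:thumbnail_loc>https://example.com/images/kibble-thumb.jpg</video:thumbnail_loc>
      <video:title>How to Make Dog Kibble at Home</video:title>
      <video:description>A step-by-step homemade kibble recipe.</video:description>
      <video:content_loc>https://example.com/videos/kibble.webm</video:content_loc>
    </video:video>
  </url>
</urlset>
```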
Once you’ve created your video sitemap, log into your account in Google Search Console and go to Sitemaps. This is where you’ll upload your video XML sitemap.
After a couple weeks, go back into Google Search Console and open the Indexing > Video pages section:
This page will tell you how many video pages Google has identified. As you can see here, I only have one video page. I didn’t create a sitemap for it since it contains an embedded YouTube video. Nevertheless, Google was able to find, index and rank it for me.
Check to make sure that all of your video pages appear here after you’ve uploaded your XML sitemap.
If any are missing, go back to the Sitemaps page and open your video sitemap. Google tells you here how many pages and videos were discovered. If you click “See Video Indexing,” it’ll tell you which ones have been indexed and which haven’t. For the ones Google hasn’t touched yet, you can submit a “fix” to prioritize that task.
An optimized video isn’t a guaranteed pathway to ranking at the top of search results. All it does is help the webpage it appears on to get there—and that’s only if the page itself is well-written, well-designed, user-friendly and valuable.
So rather than spend your time trying to add as many videos to your website as possible, focus on placing videos on only the most important pages of your site.
In addition, try not to include more than one video per page. If it’s a video resources or archives page, that’s fine. However, when it comes to actual content pages, one video should suffice.
If you have to include more—say if it’s a tutorial that contains descriptive videos throughout—just make sure that the most important video appears first in the sequence. Google has a tendency to crawl just the top parts of a page for information and content. That’s why your first photo, video and link are more likely to rank than the ones that appear lower on the page.
One more tip I have is this: Never include the same video on more than one page. It might be a really great video that explains your company’s complex process or product. However, that video will cannibalize itself in search results. In other words, if your homepage has the same video as your Process page, they could end up canceling each other out in relevant results.
When laying out a page with a video on it, add it with care.
As I mentioned in the last tip, the video should be closer to the top if there are others on the page. If it’s the only video on the page, it should still be close to the top. This makes it easier for both crawling bots and your visitors to find it.
So on the homepage, that would be the hero section. On a landing page, that would be above the fold as well. On a product page, it would be included within the main image gallery or perhaps under a Description accordion. For other pages like blog posts and internal service pages, try to get it as close to the introduction and before the first heading tag as possible.
In addition to choosing the right placement, make sure the video stands out. If possible, make it span the full width of the content area. Consider framing it in an eye-catching color. Also, make sure the cover image/thumbnail looks great. It can be the same one you used for search results or a custom one for the page.
While this post focuses on on-page video SEO, videos shouldn’t be included on webpages simply to boost ranking. Videos should always serve a purpose and help you achieve your overall aims with the website.
From visual storytelling to edification, your users should get something out of every video they encounter. If they don’t, you’re doing nothing more than draining server resources and wasting your own time optimizing that video content.
So be mindful when using and optimizing videos. They can be an incredibly effective asset in attracting visitors to your site and then converting those leads into something more, if done right.
GraphQL is a query language for making requests to APIs. With GraphQL, the client tells the server exactly what it needs and the server responds with the data that has been requested. In an earlier article, we went through an exercise to create a GraphQL server API with Apollo Server in Node.js.
In today’s article, we’ll spend some time seeing how we can get started with using GraphQL on the client and we’ll use React, a widely adopted library for building user interfaces that couples well with GraphQL.
In the realm of GraphQL, the client serves as the medium through which we interact with our GraphQL server. In addition to sending queries/mutations and receiving responses from the server, the client is also responsible for managing the cache, optimizing requests and updating the UI.
Though we can make a GraphQL HTTP request with a simple POST command, using a specialized GraphQL client library can make the development experience much easier by providing features and optimizations like caching, data synchronization, error handling and more.
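For illustration, here’s roughly what that bare-bones POST looks like with `fetch` — no client library involved (the endpoint URL and query are whatever your server expects):

```javascript
// A GraphQL request body is just JSON: the query text plus any variables.
function buildGraphQLBody(query, variables) {
  return JSON.stringify({ query, variables });
}

// Send the query as a plain HTTP POST and unwrap the response.
async function graphqlPost(url, query, variables) {
  const response = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildGraphQLBody(query, variables),
  });
  const { data, errors } = await response.json();
  if (errors) throw new Error(errors[0].message);
  return data;
}
```

A dedicated client layers caching, deduplication and state management on top of this basic request/response cycle.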
Many popular GraphQL clients exist in the developer ecosystem today such as Apollo Client, URQL, Relay and React Query. In this article, we’ll leverage React Query as our GraphQL client library.
Assuming we have a running React application, we can begin by installing the `@tanstack/react-query`, `graphql-request` and `graphql` packages.

```shell
npm install @tanstack/react-query graphql-request graphql
```
`@tanstack/react-query` is the React Query library we’ll use to make queries and mutations. The `graphql-request` and `graphql` libraries will allow us to make request functions to our GraphQL server and provide the necessary utilities to parse our GraphQL queries.
To begin using React Query utilities within our app, we’ll need to first set up a `QueryClient` and wrap our application’s root component within a `QueryClientProvider`. This will enable all the child components of `App` to access the `QueryClient` instance and, therefore, be able to use React Query’s hooks and functionalities.
In our root index file where the parent `<App />` component is being rendered, we’ll import `QueryClient` and `QueryClientProvider` from the `@tanstack/react-query` library.

```javascript
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
```
`QueryClient` is responsible for executing queries and managing their results and state, while `QueryClientProvider` is a React context provider that allows us to pass the `QueryClient` down our component tree.
We’ll then create a new instance of `QueryClient` and pass it down as the value of the `client` prop of the `QueryClientProvider` that we’ll wrap the root `App` component with.
```jsx
import { StrictMode } from "react";
import { createRoot } from "react-dom/client";
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";

import App from "./App";

// create the query client instance
const queryClient = new QueryClient();

const rootElement = document.getElementById("root");
const root = createRoot(rootElement);

// pass our query client down our app component tree
root.render(
  <StrictMode>
    <QueryClientProvider client={queryClient}>
      <App />
    </QueryClientProvider>
  </StrictMode>
);
```
The main hook React Query provides for executing GraphQL queries is `useQuery()`.
The `useQuery()` hook takes a unique key and an asynchronous function that resolves with the data returned from the API or throws an error. To see this in action, we’ll make a GraphQL query to the publicly accessible Star Wars GraphQL API.
We’ll create a component named `App` and utilize the `useQuery()` hook within it to retrieve a list of films from the `allFilms` query field. First, we’ll construct the GraphQL query as follows:
```javascript
import { gql } from "graphql-request";

const allFilms = gql/* GraphQL */ `
  query allFilms($first: Int!) {
    allFilms(first: $first) {
      edges {
        node {
          id
          title
        }
      }
    }
  }
`;
```
The GraphQL document above defines a query named `allFilms`, which accepts a variable labeled `$first` that limits the number of films retrieved from the API.
In the component function, we’ll leverage the `useQuery()` hook to initialize the GraphQL request when the component mounts. We’ll supply a unique key for the query, `'fetchFilms'`, and an asynchronous function that triggers a request to the GraphQL endpoint.
```javascript
import { gql, request } from "graphql-request";
import { useQuery } from "@tanstack/react-query";

const allFilms = gql/* GraphQL */ `
  ...
`;

const App = () => {
  const { data } = useQuery({
    queryKey: ["fetchFilms"],
    queryFn: async () =>
      request(
        "https://swapi-graphql.netlify.app/.netlify/functions/index",
        allFilms,
        { first: 10 }
      ),
  });
};
```
In the above code snippet, the `useQuery()` hook is triggered when the `App` component mounts, invoking the GraphQL query with the specified unique key and variables.
The `useQuery()` hook returns a result object that contains various properties representing the state and outcome of the query. In addition to the `data` fetched from the query when successful, the result object also contains `isLoading` and `isError` values. `isLoading` tracks the loading status of the request and `isError` notifies us when an error has occurred during the request.
With the `isLoading` and `isError` values, we can have the component render different elements depending on the state of our GraphQL request.
```jsx
import { gql, request } from "graphql-request";
import { useQuery } from "@tanstack/react-query";

const allFilms = gql/* GraphQL */ `
  query allFilms($first: Int!) {
    allFilms(first: $first) {
      edges {
        node {
          id
          title
        }
      }
    }
  }
`;

const App = () => {
  const { data, isLoading, isError } = useQuery({
    queryKey: ["fetchFilms"],
    queryFn: async () =>
      request(
        "https://swapi-graphql.netlify.app/.netlify/functions/index",
        allFilms,
        { first: 10 }
      ),
  });

  if (isLoading) {
    return <p>Request is loading!</p>;
  }

  if (isError) {
    return <p>Request has failed :(!</p>;
  }

  return (
    <ul>
      {data.allFilms.edges.map(({ node: { id, title } }) => (
        <li key={id}>
          <h2>{title}</h2>
        </li>
      ))}
    </ul>
  );
};
```
When our request is in-flight, the component will render a loading message.
If the request errors, the component will render an error message.
Finally, if the request is not in-flight, no errors exist, and data is available from our request, we’ll have the component render the final intended output—a list of Star Wars films fetched from the API.
Test the above in this Codesandbox link.
React Query provides a `useMutation()` hook to allow mutations to be conducted from React components. Unlike queries, mutations are used to create, update or delete data on the server or otherwise perform server-side effects.
Like the `useQuery()` hook, the `useMutation()` hook receives an asynchronous function that returns a promise. The publicly accessible Star Wars GraphQL API we’re using doesn’t have root mutation fields for us to use, but we’ll assume a mutation called `addFilm` exists that allows us to add a new film to the list of films saved in our database.
```javascript
import { gql, request } from "graphql-request";
import { useMutation } from "@tanstack/react-query";

const addFilm = gql`
  mutation addFilm($title: String!, $releaseDate: String!) {
    addFilm(title: $title, releaseDate: $releaseDate) {
      id
      title
      releaseDate
    }
  }
`;
```
The `addFilm` mutation accepts `title` and `releaseDate` as variables and, when successful, returns the `id` of the newly created film, along with the `title` and `releaseDate` that were passed in.
In the component function, we’ll leverage the `useMutation()` hook to help trigger the mutation when a button is clicked. We’ll call `useMutation()` and supply the asynchronous function that triggers the GraphQL mutation request.
```javascript
import { gql, request } from "graphql-request";
import { useMutation } from "@tanstack/react-query";

const addFilm = gql`
  ...
`;

const SomeComponent = () => {
  const mutation = useMutation({
    mutationFn: async (newFilm) =>
      request(
        "https://swapi-graphql.netlify.app/.netlify/functions/index",
        addFilm,
        newFilm
      ),
  });
};
```
The `useMutation()` hook returns a `mutation` object that contains details about the mutation request (`isLoading`, `isError`, etc.). It also contains a `mutate()` function that can be used anywhere in our component to trigger the mutation.
We’ll have a button trigger the `mutate()` function when clicked. Additionally, we can display some messaging to the user whenever the mutation request is either in flight or has errored.
```jsx
import { gql, request } from "graphql-request";
import { useMutation } from "@tanstack/react-query";

const addFilm = gql`
  mutation addFilm($title: String!, $releaseDate: String!) {
    addFilm(title: $title, releaseDate: $releaseDate) {
      id
      title
      releaseDate
    }
  }
`;

const SomeComponent = () => {
  const { mutate, isLoading, isError } = useMutation({
    mutationFn: async (newFilm) =>
      request(
        "https://swapi-graphql.netlify.app/.netlify/functions/index",
        addFilm,
        newFilm
      ),
  });

  const onAddFilmClick = () => {
    mutate({
      title: "A New Hope",
      releaseDate: "1977-05-25",
    });
  };

  return (
    <div>
      <button onClick={onAddFilmClick}>Add Film</button>
      {isLoading && <p>Adding film...</p>}
      {isError && <p>Uh oh, something went wrong. Try again shortly!</p>}
    </div>
  );
};
```
This covers the fundamentals of starting a project with React and GraphQL. By using a GraphQL client library, we can leverage hooks and utilities to conduct GraphQL queries and mutations efficiently. The data we fetch or manipulate with those queries and mutations lets us render different UI elements, keeping the application responsive to user interactions and data changes.
In this article, we explored the basics of integrating GraphQL with a React application, utilizing React Query as our client library. We wrapped our application in a `QueryClientProvider` to make use of React Query functionalities in our application component tree and proceeded to make GraphQL queries and mutations using the `useQuery()` and `useMutation()` hooks, respectively.
Understanding the principles of GraphQL and how it integrates with React is important since it can sometimes offer a more efficient alternative to REST when dealing with APIs. By leveraging libraries like React Query, we can also significantly simplify the process of fetching, synchronizing and managing the state of GraphQL server-side data in our React applications.
We’ll continue discussing GraphQL and React with some follow-up articles soon. Stay tuned!
The immense growth of ecommerce wouldn’t be possible without payment processing technology. Can you imagine having to process each online customer transaction using an old school point of sale system or credit card processor?
But just like other technologies we use when building websites and apps, there are lots of options to choose from. So which one should you use when setting up your next ecommerce site? You could always opt for PayPal or Stripe since they’re the most popular. But are they really the best choice?
In this post, we’ll take a look at the 14 traits to consider when choosing a payment processor and then go over various options.
The payment processor you integrate with your shop can seriously impact the user experience and, consequently, the conversion rate and sales generated on your ecommerce site.
Here are some of the most popular payment processing options today:
And here are some payment processors that are rising in popularity. Most of them have positive ratings on Trustpilot, by the way:
Before you settle on one, ask yourself the following questions as you consider the options:
Whenever you build a website or app, you have a checklist of security measures to implement before it can launch. Even though you’re not building the payment processing technology, you still have to make sure it adheres to those strict protocols (and more).
The easiest way to do this is by finding a payment processor that is PCI DSS compliant. This means the software has been validated against 12 security protocol requirements. These include things like:
If a security breach were to take place at checkout, you won’t be able to pass the blame onto the payment processor. You might be able to internally, but definitely not with your customers. If it took place on your site or in your app, then they’re going to hold you responsible.
So choosing a payment processor that prioritizes security and is PCI DSS compliant is a must.
Another way to secure the payment gateway is with fraud protection. There are different ways you’ll see this implemented in a payment gateway:
Card Verification Value (CVV) is that three- or four-digit numeric code that appears on the back or front of a credit card. This isn’t a foolproof way to prevent fraud since someone could have stolen the actual credit card. However, for a fraudster who only has the credit card number, they won’t be able to complete the transaction without the verification code.
Address Verification Service (AVS) is when the processor requires the customer to fill in their billing address as well as their ZIP code. If it doesn’t match the data the credit card company or bank has on file for the customer, the transaction is rejected.
3D Secure is another way that payment processors authenticate that the customer is who they say they are. It’s basically two-factor authentication that takes place during checkout. After the customer logs in or enters their payment details, they’re sent a verification code either via SMS or email.
There are other fraud protection measures a payment processor might use, but these are the most common ones.
There isn’t a whole lot of data on the fastest payment processors online, so before officially committing to one, give it a try for yourself. There are a few things to look out for when it comes to speed.
The first is how quickly the cart and checkout pages load. Just as with the rest of your website, it shouldn’t take more than a couple seconds.
Also check to see how fast and easy the checkout form is to use. When customers use the tab key to move from field to field, it should happen instantaneously. No fields should be skipped either.
Lastly, check how quickly and smoothly the processing happens. The last thing you want is for customers to fill everything out and hit the “Submit” or “Purchase” button, only to not know whether the transaction is processing or how long it will take.
The flow from your website to checkout should be as seamless as possible. If you’ve built a responsive and mobile-first website, then you’ll want a payment gateway that looks and feels the same way.
Again, I’d recommend giving the payment gateway a try for yourself. While they might claim it’s responsive, there might be certain aspects of it that don’t feel right on smaller screens.
For instance, I recently was checking out on my smartphone and couldn’t get to the final ZIP code field for some reason. I wasn’t able to scroll down to it nor could I use the tab arrow to get me there. I ended up having to turn my computer on and complete my purchase that way.
Some payment processors give you their out-of-the-box software and say, “Here ya go!” And that’s it. You’ll of course be able to choose which payment options to display. But what about when it comes to the structure of the checkout page or the form fields?
No out-of-the-box solution will be okay for every customer. Instead, you want to find a payment gateway that you can customize. In addition to modifying the payment methods, the layout of the page, the form fields and other features, you should be able to brand and edit your ecommerce emails.
If you’ve designed a comfortable, convenient and streamlined experience for customers up to this point, don’t let it fall apart with an email branded with the payment processor’s name. Or with a form that’s too long to fill out and full of irrelevant fields. Or with a process that doesn’t allow guests to check out.
You know your users better than anyone. Use software that enables you to design this last step in the process just for them.
Another way your payment gateway might disrupt the checkout experience is if you send customers to a different website to complete their transaction. If the payment gateway is super recognizable, your shoppers might be OK with checking out there. If it’s not, you could see an increase in user abandonment rates.
So take a look at how your payment processor handles this.
Is there a non-hosted option where the payment gateway is integrated directly into your website? This will create the most seamless experience for your shoppers as they’ll stay right where they are to pay.
If there is only a hosted option, what does it look like—is it an embedded iFrame on your site or does it take users to a new domain? Will customers find the processor’s branding instead of your own? Will the look and feel of it completely conflict with the site or app you built?
If you don’t think it will cause too much friction, then this option might work.
There are a couple of things to look for here. For starters, you’ll want to make sure the payment processor allows you to sell physical products, digital products, services or whatever it is you sell.
Also look to see if they support recurring payments, memberships and auto-renew product sales as well. If you plan on generating recurring revenue, you’ll need a processor that makes it easy to do so.
Different stores and brands have different needs. For example, if you’re building an ecommerce site for a brick-and-mortar store, it would be nice to find a payment processor that offers different technologies, like an online payment gateway as well as physical point-of-sale systems.
It’s not just the types of payment processing you should consider either. Think about what sorts of tools and features will help you or your client better manage their orders and payments. For example, you might need one that can handle chargebacks and refunds with ease.
The first piece of software your payment gateway needs to integrate with is your content management software. I don’t think you’ll have much of a problem with that. However, I would check to see if there’s a direct connection between your CMS and the gateway. If not, review the integration process to make sure it’s not overly complicated. You want there to be a strong and stable connection between the two.
Then figure out what other software needs to integrate with your payment gateway. For example:
By integrating other apps with your payment gateway, you can streamline the flow of payment and order information to other aspects of the business that need it.
It doesn’t happen as much nowadays, but I remember in the past when I’d have issues trying to pay with an American Express card on some websites and apps. It probably wasn’t the vendor’s choice. They most likely were using a payment processor that didn’t support those cards.
So that’s something to look up if your client or employer wants certain credit cards to be accepted.
That’s not all. You’ll want to dig deeper into what other payment methods are offered. For example, you might find that they work with Alipay and PayPal. If you want to accept ACH Direct Debit, that’s something else to look into.
One other thing to consider is if they accept mobile wallet payments. While these payments are commonly made in store, websites and apps should be able to accept them as well. If you want your shop to access more revenue opportunities, choosing a payment processor that works with Apple Pay or Google Pay is a good idea.
Check to see which countries and currencies are supported. This list will first tell you if you’re even eligible to use this service. Not every payment processor is everywhere. PayPal, for instance, has a blacklist of countries it won’t do business with either because the market isn’t big enough or they have problems with fraud.
This list will also tell you what sort of market penetration you can expect. If you’re building an ecommerce site that’s meant to be for global shoppers, you’ll need a payment method that enables the greatest number of customers to buy from it.
Before you go scratching a payment processor off of your list, though, take some time to really get to know your target users. They might not even live in the areas where you’re unable to take payments from.
Payment processors aren’t free to use. Some are free to set up while others you have to pay to use. On top of that, there are other fees to consider, like:
Your fee structure may also differ depending on the volume of sales your site or app does. So pay close attention to that if you expect your store to scale rapidly.
Once you’ve settled on some payment processor options, do a search for the company’s name plus “fees.” You’ll find a page that lists out all the nitty-gritty details you need to know.
Every company is going to tell you that they offer fast and helpful customer support. If you want to know the truth about what it’s like to use support when you need it, go to Trustpilot and do a search for the company’s name.
When researching this article, I found that some of the most popular solutions (with the exception of Stripe) had abysmal ratings and reviews when it came to customer service. Interestingly, the newer and lesser-known options had far fewer complaints.
So if that matters to you, do your research ahead of time.
Some payment processors impose minimum and maximum limits. One reason why this matters is because you won’t want to get stuck paying a steep minimum fee when your site or app isn’t generating any sales.
Another reason is because a maximum limit can hamper your ability to scale your business and its sales. You need a payment gateway that stays reliable even when there's tons of traffic flowing through it, and that keeps working no matter how much you sell.
There are two aspects of payment processing to focus on as you go through the decision-making process. First, is the payment processor company trustworthy? Second, is the payment gateway software reliable and can it do everything you need it to do?
The last thing you want to do is to settle on a payment processor, only to find that it keeps you from selling to a specific market or it takes too big of a chunk out of each sale. So once you find a few solutions you like the look and sound of, take them through the 14 questions above and see how they pan out. That should help you find the right one for the digital product you’ve built.
This blog was prepared by Suzanne Scacca in their personal capacity. The opinions or representations expressed herein are the author’s own and do not necessarily reflect the views of Progress Software Corporation, or any of its affiliates or subsidiaries. All liability with respect to actions taken or not taken based on the contents of this blog are hereby expressly disclaimed. The content on this posting is provided “as is” with no representations made that the content is error-free.
In programming, type safety is crucial for preventing runtime errors and better supporting your code's robustness. One way to achieve this in JavaScript and TypeScript is by utilizing Zod, a powerful library designed for schema validation and parsing. This article will introduce you to the benefits of making your code type-safe and how you can implement Zod to enhance the quality and reliability of your projects.
Zod offers a straightforward, declarative syntax that lets you set strict data validation rules for various data types. By adopting Zod in your development process, you can catch potential type-related issues early on, reducing debugging time and promoting a more maintainable codebase.
Getting started with Zod is simple, as it seamlessly integrates with existing TypeScript and JavaScript projects. As you explore the various features and possibilities that Zod provides, you’ll discover how it can help improve your code’s type safety, ultimately leading to better software and a smoother development experience.
Type safety is an essential aspect of programming that helps you minimize errors and enhance the maintainability of your code. Ensuring that you use the correct data types in your application can catch potential problems early in the development process. This is where Zod comes into play. This library allows you to create type-safe code efficiently.
When working with type-safe code, you benefit from the knowledge that the data types you expect are the ones you are receiving. This reduces the need for excessive data validation and promotes a clean, streamlined architecture. Zod makes this even more accessible by providing expressive and powerful schemas that can easily be incorporated into your projects.
As a practical example, imagine you are working with user-submitted data. With Zod, you can define a schema that maps all the different fields from the data to their respective types and requirements. Once you’ve created the schema, Zod ensures all user data adheres to the defined rules, helping you catch bugs and potential issues before they become critical.
Using Zod and focusing on type safety makes your code more understandable, reliable and easier to maintain. You will also appreciate the improved debugging experience and the confidence gained by knowing your code is less prone to unexpected errors. So, with Zod at your side, you are on your way to creating safer and more reliable code.
To get started, you need to install Zod into your project by running one of the commands:
npm install zod
yarn add zod
Once installed, you can start creating schemas to define the data structure and types you expect, like this:
import { z } from "zod";
const userSchema = z.object({
name: z.string(),
age: z.number().min(0),
country: z.string().optional(),
});
In this schema, you define a user object with three properties: name (a required string), age (a required non-negative number) and country (an optional string). You can use Zod's built-in functions like min() and optional() to set constraints on the data.
Validating data using the defined schema is straightforward. You can use your schema’s parse() method to validate and parse the data. The data will be returned as a typed object if it passes validation. Otherwise, an error will be thrown:
try {
const validUser = userSchema.parse({
name: 'John Doe',
age: 30,
});
console.log(validUser); // { name: 'John Doe', age: 30 }
} catch (error) {
console.log(error.message);
}
Additionally, Zod works well with TypeScript, as you can leverage Zod’s infer
utility to create a type based on your schema. This enables strong typing and autocompletion
in your editor:
import { z } from "zod";
const userSchema = z.object({
name: z.string(),
age: z.number().min(0),
country: z.string().optional(),
});
type User = z.infer<typeof userSchema>;
With this schema defined, you can validate your user objects using the schema's parse() method, for example:
// User object with correct data types
const validUser: User = {
name: "John",
age: 26,
country: "Brazil",
};
// This will pass without errors
userSchema.parse(validUser);
// User object with incorrect data types
const invalidUser = {
name: "Jane",
age: "30", // age should be a number
country: "Brazil",
};
// This will throw a ZodError
userSchema.parse(invalidUser);
Zod provides custom validation via refinements. Zod was designed to mirror TypeScript as closely as possible, but there are many so-called "refinement types" you may wish to check for that can't be represented in TypeScript's type system, for instance, whether a number is an integer or whether a string is a valid email address.
const myString = z.string().refine((val) => val.length <= 255, {
message: "String can't be more than 255 characters",
});
Zod also allows you to create custom validation rules using the refine method. This is useful when your data needs to meet specific conditions beyond the data type. For example, you can add a rule that requires the user’s age to be at least 18:
const UserSchema = z.object({
name: z.string(),
age: z.number().refine((value) => value >= 18, {
message: "Age must be at least 18",
}),
email: z.string().email(),
});
Sometimes, we want to compose many schemas into a single one. We can do that with Zod, too; it prevents code duplication and keeps complex schemas manageable.
import * as z from "zod";
const professionSchema = z.object({
name: z.string(),
company: z.string(),
address: z.string(),
since: z.string()
});
const userSchema = z.object({
name: z.string(),
age: z.number().refine((value) => value >= 18, {
message: "Age must be at least 18"
}),
email: z.string().email(),
profession: professionSchema
});
type User = z.infer<typeof userSchema>;
const newUser: User = {
name: "John Doe",
age: 30,
email: "john@doe.com",
profession: {
name: "Software Engineer",
company: "Telerik",
address: "Street 101",
since: "2021"
}
};
console.log(newUser);
Adopting Zod into your coding practices offers numerous benefits for type-safety and reducing runtime errors. By leveraging Zod to validate and parse data, you can improve the integrity and consistency of your code, simplifying maintenance and enhancing efficiency.
To maximize the advantages of Zod, take the time to explore its rich feature set, including the support for custom validators, refinement and versatile error handling. The more you apply these tools, the more confident and secure your code will become.
Finally, remember that adopting best practices like using Zod for type safety enhances your code quality and contributes to the overall growth of your coding skills. Keep refining your techniques to stay ahead in the world of modern programming.
I write this article as a deep admirer of Progress Telerik for its products, extraordinary graphical interfaces, high-performance Grids, document manipulation libraries, report editor, and an immense number of components for various languages and platforms, in addition to the quality of support and best of all: listening to its customers.
The fact is, here I am, now a Progress Champion writing for the Progress Telerik blog because I was a customer first—and I’m sharing from that experience. My admiration is technical because, before I met Telerik, I had already developed more than 200 components using VB 5/6 (ActiveX DLL) as a developer, consultant and CEO at Menphis, based in Brazil.
The image below is the face of my main product at the time, Advocati Desktop, before using Progress Telerik:
Main screen from Advocati Desktop
With all this experience, I grew professionally, reaching unprecedented know-how and know-why because, with Advocati Desktop, I gathered requirements, developed, implemented, offered support, and worked simultaneously as an IT Administrator using the system I had designed—a unique experience.
When Menphis started working with C# (WinForms and ASP.NET WebForms), we found in Telerik the same quality we sought to create when making our products.
In this way, the choice for Telerik was automatic because other competitors did not offer the same quality and diversity of products.
We built the web version of Advocati Desktop with Telerik UI for ASP.NET AJAX. The demo is accessible via the link on my digital card: https://jsmotta.com.br.
Customers’ CRUD
With the COVID-19 pandemic, I entered a new phase in my career. I started working for other companies as an employee or freelancer, providing services to large companies in Brazil and internationally.
In these companies where I worked and also in the consulting time of Menphis, I always observed that developers have this idea that they need to create their own components, just like me. However, the burden is enormous because always keeping up to date and working with the system for an extended period generates a technological debt that no one wants for themselves.
I also observed that large companies could drastically reduce their development time by switching to a professional component library. Instead of using RAD (rapid application development) tools with the configuration, operation, performance and scalability of Telerik products, much time was wasted trying to solve behavioral problems in open-source ("free") components, which end up being far more expensive because they come without support or updates. Not a smart economy at all!
I call on all Project Managers, CTOs, CIOs and those interested in technology to consider Telerik DevCraft, which, in addition to having a coherent and customizable look, also has the benefit of being easily updated to the most recent versions of programming languages/tools such as WinForms, WPF, React, Angular, Vue, jQuery, ASP.NET Core, Blazor, .NET MAUI, etc. The license is perpetual, and you get one year of free upgrades. Stay up to date with new versions and robust support by renewing your license, which comes discounted when done annually.
To reinforce my point, with Telerik, you save development time with easy-to-handle components that come with functional code, demos and forums; that deliver high performance; that are scalable, with unique features and attractive UIs; and, above all, that will add to your product’s or service’s credibility by using the same technology that Microsoft, IBM and NASA (among other giants) use globally.
Check all Telerik products at https://www.telerik.com/all-products
I encourage you to give it a try—download a free trial today.
Virtually all modern frontend frameworks embrace the idea that a web application should be split into smaller reusable pieces called components, where each component is just a piece of UI, some logic and data required to render that piece. Furthermore, components may communicate with other components, giving birth to the idea that any web application is just a tree of components.
This guide provides a comparative analysis of the Angular framework and React library, focusing on their approaches to components. It’ll cover component creation, lifecycle, communication, and how these technologies allow components to detect and react to changes as users interact with the UI.
Angular is a TypeScript-first, full-fledged framework created by Google that allows you to create client or server-rendered enterprise-grade web applications at any scale. Some core concepts of this framework are components, services and directives, all of which are just classes annotated with @Component()
, @Injectable()
and @Directive()
decorators, respectively.
Components encapsulate the following:
Components may use services and directives. Services allow keeping related functionality in a separate class, and directives allow components to add or modify the behavior of their views.
Angular components rely heavily on dependency injection, enabling them to use the services they need to work. The Angular Injector provides services to components from their constructor or using the inject()
function.
Let’s create an Angular project. Start by installing the Angular CLI, a convenient tool for authoring Angular apps.
npm install -g @angular/cli
Next, create a project called my-angular-app
ng new my-angular-app
Follow the prompts and select all the default options from the CLI to create the Angular project as shown below:
Our app uses Angular 17, the latest version of Angular at the time of writing.
To preview the running application, run npm start
.
React is a JavaScript library that creates reusable client and server-side components for web applications. It was created by the Meta team in 2013.
In React, components are plain JavaScript functions. The React team has the philosophy that JavaScript is in charge of the markup rendered on the screen, which is why functions that are React components hold the logic, the styles and the markup at once.
Although not compulsory and not coupled to the React library, the templating language commonly used for the markup is JSX, an HTML-like representation of UI elements and components.
Let's create a basic client-side React app. Run the following command in your terminal to create a TypeScript-powered React project in a folder called my-react-app
.
npm create vite@latest my-react-app -- --template react-ts
Next, run the following commands to install all the project dependencies and preview the application in your browser.
cd my-react-app
npm install
npm run dev
To talk about components in both technologies, we will incrementally build a simple counter application like the one below.
Our counter app above will consist of three components: the app shell, the counter and a counter button. A representation of our app as a tree of components is shown below.
Our application logic is simple. The app renders some dummy text that includes the current time, a random image of a clock and the counter component, which allows the user to increment a number by interacting with the counter button. When the counter's value is greater than 5, the button is disabled.
This simple app allows us to do some interesting things. To take a closer look at the component architecture of Angular and React, our focus will be on the following key areas:
Now, we will build the counter and counter button components and then discuss the following.
Let us create our root App component and render some basic markup.
Update the src/App.tsx
file with the following:
import "./app.css";
function App() {
  return (
    <div>
      <h1>Hello World time is 2:46:48</h1>
      <img src="" alt="clock" />
    </div>
  );
}
export default App;
React components are plain JavaScript functions whose names must begin with a capital letter; notice that the returned JSX is wrapped in a single parent element (a div in our case). React components must return a single root element. This idea is not React-specific: all functions in JavaScript can only return a single value.
For TypeScript to handle JSX templates correctly, the file name has to have a
.tsx
extension.
We also imported the styles for our component by importing an app.css
file.
In the src/app
folder, let’s create a component called app.component.ts
by running these commands:
cd src/app
touch app.component.ts
Update this file with the following:
import { Component } from "@angular/core";
@Component({
  selector: "app-root",
  template: `
    <h1>Hello World time is 2:46:48</h1>
    <img src="" alt="clock" />
  `,
  styles: [
    `
      * {
        display: block;
      }
    `,
  ],
  standalone: true,
})
export class AppComponent {}
As shown above, Angular components are classes decorated with the @Component()
decorator, which holds the options used to configure the component.
The template and style hold the component’s markup and style, respectively. Templates are HTML markup that may use the Angular templating language.
Templates may also be passed using a templateUrl property holding a relative path to an HTML file. Likewise, styles can be passed using a styleUrls property, an array of relative paths pointing to the component's CSS files. For a given component, the styles and styleUrls options can be used together, but for templates you must choose either template or templateUrl, not both.
The selector
property is a unique name for the component when it is rendered in another component, and ours is called app-root
.
Older versions of Angular required that, before components were used, they had to be registered in a Module (a class decorated with @NgModule()
) to configure it further. This is no longer necessary; the standalone prop makes our component self-contained to manage and configure all its dependencies. This is the recommended way to create components.
Let’s make our App.tsx
React component and the app.component.ts
Angular component visible in the browser. The key takeaway is that each technology renders components in an HTML file. This file is sent to the browser with the bundled JavaScript, CSS and other things the client requires and served by their respective build tools.
Update the main.tsx
file in your React project to match the following:
import ReactDOM from "react-dom/client";
import App from "./App.tsx";
ReactDOM.createRoot(document.getElementById("root")!).render(<App />);
The ReactDOM library renders our React components in the browser via its createRoot method, which is called with an HTML DOM element. Here, it receives the element with the id root from the index.html file in our project's root folder.
The render()
function then receives the <App/>
component and renders it to the browser.
Update the main.ts
file in a root folder to match the following:
import { bootstrapApplication } from "@angular/platform-browser";
import { appConfig } from "./app/app.config";
import { AppComponent } from "./app/app.component";
bootstrapApplication(AppComponent).catch((err) => console.error(err));
Similarly, the @angular/platform-browser
module allows us to mount our angular components in the browser using its bootstrapApplication()
function, which accepts our AppComponent
class. This function mounts the AppComponent
to the main/index.html
file using its app-root
selector.
If we run our application on both platforms, we should see our app display a static time and an empty image, as shown below.
Yes, this is expected because our template holds hardcoded values, and the image has an empty src
property, as shown below.
<h1>Hello World time is 2:46:48 </h1>
<img src='' alt='clock' />
Remember, apart from the rendered UI template, components also hold data. Data binding allows components to include that data in the rendered component's UI template. Let's now bind some data to our components.
Update the App.tsx
file to look like so:
function App() {
const imageURL =
"https://images.unsplash.com/photo-1456574808786-d2ba7a6aa654?w=800&auto=format&fit=crop&q=60&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxzZWFyY2h8NHx8Y291bnR8ZW58MHx8MHx8fDA%3D";
return (
<div>
<h1>Hello World time is {new Date().toLocaleTimeString()}</h1>
<img src={imageURL} alt="" />
</div>
);
}
export default App;
Data binding in React is done between { and }, which can contain any JavaScript expression, i.e., anything that produces a value: a variable like imageURL above, or a function call like new Date().toLocaleTimeString(). This curly-brace syntax is the only method for binding data in React components.
Update the app.component.ts
file with the following:
import { Component } from "@angular/core";
@Component({
  selector: "app-root",
  template: `
    <h1>Hello World time is {{ now() }}</h1>
    <img [src]="imageURL" alt="clock" />
`,
styles: [
/*...*/
],
standalone: true,
})
export class AppComponent {
now() {
return new Date().toLocaleTimeString();
}
imageURL =
"https://images.unsplash.com/photo-1456574808786-d2ba7a6aa654?w=800&auto=format&fit=crop&q=60&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxzZWFyY2h8NHx8Y291bnR8ZW58MHx8MHx8fDA%3D";
}
Angular provides us with four different syntaxes for data binding:
- {{ and }}: The default interpolation syntax for embedding primitives or expressions that return them. We used it to embed the time string by invoking the now() method we included in our AppComponent. Note that something like {{ new Date().toLocaleTimeString() }} is not supported in Angular component templates. The framework allows developers to change the {{ and }} delimiters by setting an interpolation property in the options fed to @Component(). For example, passing interpolation: ["|_", "_|"] enables using |_ and _| in our templates.
- [ and ]: Used for property binding, e.g., the [src] property above, which references the imageURL instance variable of our AppComponent.
- ( and ): Used for event binding, usually when we want to bind component methods to DOM events.
- [( and )]: Used for two-way data binding in Angular components, commonly used with forms.
Running our app on both platforms, we get the current time printed on the screen.
At this juncture, before we explain the remaining component-related concepts of Angular and React, let’s build the counter and counter button components.
We need to create two files in our src
folder. These are Counter.tsx
and CounterButton.tsx
.
Update the Counter.tsx
file with the following:
import { CSSProperties } from "react";
export default function Counter() {
const styles: CSSProperties = {
border: "2px solid red",
};
return (
<div style={styles}>
<h2>counter value is 0</h2>
</div>
);
}
Next, update the CounterButton.tsx
file with the following:
function CounterButton() {
  return <button>increment</button>;
}
export default CounterButton;
We need to create two files in our src/App
folder. These are counter.component.ts
and counter-button.component.ts
.
Update the counter.component.ts
file to match the following:
import { Component } from "@angular/core";
@Component({
selector: "counter-comp",
template: `
<div style="border:2px solid red">
<h2>counter value is 0</h2>
</div>
`,
styles: [],
standalone: true,
})
export class CounterComponent {
constructor() {}
}
Next, update the counter-button.component.ts
file with the following:
import { Component } from "@angular/core";
@Component({
selector: "counter-button",
template: ` <button>increment</button> `,
styles: [],
standalone: true,
})
export class CounterButton {
constructor() {}
}
We want to show how components render other components, connecting all of our components to end up with this structure.
Let’s do that in both applications. The idea is simple: We will connect the counter button component to the counter component and then connect the counter to the app component.
In React, Component A can be composed of Component B if A imports B and embeds it in its returned template.
Let’s now include the CounterButton
in the Counter
.
import { CSSProperties } from "react";
import CounterButton from "./CounterButton";
export default function Counter() {
const styles: CSSProperties = {
border: "2px solid red",
};
return (
<div style={styles}>
<h2>counter value is 0</h2>
<h1>This is a counter app</h1>
<CounterButton />
</div>
);
}
Next, let’s connect the Counter
component to the AppComponent
component.
import Counter from "./Counter";
function App() {
const imageURL =
"https://images.unsplash.com/photo-1456574808786-d2ba7a6aa654?w=800&auto=format&fit=crop&q=60&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxzZWFyY2h8NHx8Y291bnR8ZW58MHx8MHx8fDA%3D";
return (
<div>
<h1>Hello World time is {new Date().toLocaleTimeString()}</h1>
<img src={imageURL} alt="" />
<h1>This is a counter app</h1>
<Counter />
</div>
);
}
In Angular, Component A can be composed of Component B if A imports B and includes B in its imports array in its @Component()
decorator options.
Let’s now include the CounterButton
in Counter
:
import { Component } from "@angular/core";
import { CounterButton } from "./counter-button.component";
@Component({
selector: "counter-comp",
template: `
<div style="border:2px solid red">
<h2>counter value is 1</h2>
<counter-button />
</div>
`,
styles: [],
standalone: true,
imports: [CounterButton],
})
export class CounterComponent {
constructor() {}
}
Likewise, let’s connect the Counter
to the AppComponent
:
import { Component } from "@angular/core";
import { CounterComponent } from "./counter.component";
@Component({
selector: "app-root",
template: `
<h1>Hello World time is {{ now() }}</h1>
<img [src]="imageURL" alt="" [srcset]="" />
<counter-comp />
`,
styles: [
`
* {
display: block;
}
`,
],
standalone: true,
imports: [CounterComponent],
})
export class AppComponent {
constructor() {}
now() {
return new Date().toLocaleTimeString();
}
imageURL =
"https://images.unsplash.com/photo-1456574808786-d2ba7a6aa654?w=800&auto=format&fit=crop&q=60&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxzZWFyY2h8NHx8Y291bnR8ZW58MHx8MHx8fDA%3D";
}
If we run both applications now, we should see the counter displayed on the browser, as shown below.
So far, our application displays static data, so let's add some state to our app. Hold on, what is state? State is the data held by a component that typically changes over time as the user interacts with the component's UI.
Our Counter component currently displays a static value. Let’s now add some state to hold the counter value and a function to increment the counter.
React provides several functions (called hooks) for state management, such as useState() and useReducer(). The React ecosystem also includes libraries such as Redux and MobX, which are used to simplify state management.
Here, we will only describe state management using useState()
. The fundamental idea about component state is similar irrespective of the option chosen.
Let’s now add state to our counter component by updating the Counter.tsx
file with the following:
import { CSSProperties, PropsWithChildren, useState } from "react";
import CounterButton from "./CounterButton";
export default function Counter(props: PropsWithChildren) {
  const [count, setCount] = useState(0);
  function updateCounter(val: number) {
    setCount(count + val);
  }
  const styles: CSSProperties = {
    border: "2px solid red",
  };
  return (
    <div style={styles}>
      <h2>counter value is {count}</h2>
      <CounterButton />
      {props.children}
    </div>
  );
}
Our counter, by default, holds a value of 0. The useState() call returns an array with two elements: the value (count) and a setter function (setCount).
We also define a function called updateCounter() that accepts a number and updates the counter's value using the setter function. Later, when we discuss change detection in components, it will become clearer why React requires using a setter instead of directly changing the value with something like count += val.
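As a preview, here is a simplified sketch of why the setter matters. This is illustrative only, not React's actual internals: React compares the previous and next state values with Object.is to decide whether anything changed, so mutating a value in place leaves the reference identical and the change goes undetected, while the setter supplies a genuinely new value:

```typescript
// Illustrative only: how reference comparison misses in-place mutation.
const prevState: { count: number } = { count: 0 };

// Mutating in place: same object reference, so no change is detected
const mutated = prevState;
mutated.count = 1;
console.log(Object.is(prevState, mutated)); // true -> looks unchanged

// Producing a new value (what a setter encourages): different reference
const nextState = { ...prevState, count: 2 };
console.log(Object.is(prevState, nextState)); // false -> change detected
```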
In Angular, state can be maintained using regular instance variables of your component class, which can hold primitive types, observables (using RxJS) or signals. Also, the Angular ecosystem provides developers with tools like the NgRx store (based on Redux) to further simplify the task of managing app state at scale.
We will describe how to use the component’s instance variable for state management.
Let's update our Counter component by editing the counter.component.ts file.
import { Component } from "@angular/core";
import { CounterButton } from "./counter-button.component";
@Component({
selector: "counter-comp",
template: `
<div style="border:2px solid red">
<h2>counter value is {{ count }}</h2>
<counter-button />
</div>
`,
styles: [],
standalone: true,
imports: [CounterButton],
})
export class CounterComponent {
constructor() {}
count = 1;
updateCounter(val: number) {
this.count += val;
}
}
We included an instance variable called count
and a function called updateCounter
that updates it.
Component communication focuses on how components interact; the interaction is typically done by passing data between components.
In React and Angular, data flow from parent to child is unidirectional, i.e., parents can only pass data to children rather than the other way around.
All React components receive data via props. A prop is just a key-value pair where the key name is a string, and the value can be a primitive or an object.
Our counter button is our concern at this point. It needs to be able to update the counter, and it also needs to be disabled when the counter value is greater than 5. We want our counter component to pass this data to this component.
Let’s configure our CounterButton component to receive some props.
interface CounterButtonProps {
handleIncrement: (val: number) => void;
disabled: boolean;
}
export default function CounterButton({ handleIncrement, disabled }: CounterButtonProps) {
return (
<button onClick={() => handleIncrement(1)} disabled={disabled}>
increment
</button>
);
}
The CounterButton component accepts two props: a disabled
prop to disable the button and the handleIncrement
prop, which is a function to increment the counter. Let’s pass these props from our Counter
component.
import { CSSProperties, PropsWithChildren, useState } from "react";
import CounterButton from "./CounterButton";
export default function Counter(props: PropsWithChildren) {
const [count, setCount] = useState(0);
function updateCounter(val: number) {
setCount(count + val);
}
const styles: CSSProperties = {
border: "2px solid red",
};
return (
<div style={styles}>
<h2>counter value is {count}</h2>
{count > 5 ? <h4>count is > 5</h4> : null}
<CounterButton
handleIncrement={(val: number) => updateCounter(val)}
disabled={count > 5}
/>
{props.children}
</div>
);
}
The Counter component passes these two props, with their respective values, down to the CounterButton.
React components can also be passed as props. So suppose our CounterButton component needs a prop named "special" that is expected to be a component. You can do something like <CounterButton special={<SomeComponent />} />.
Similarly, Angular data is passed between parent and child components as key-value pairs; however, unlike React, Angular components cannot be passed as values between parent and child components.
Let’s go the other way around. Let’s feed data from the CounterComponent
to the CounterButton
component this time.
import { Component } from "@angular/core";
import { CounterButton } from "./counter-button.component";
@Component({
selector: "counter-comp",
template: `
<div style="border:2px solid red">
<h2>counter value is {{ count }}</h2>
<counter-button
(handleIncrement)="updateCounter($event)"
[disabled]="count > 5"
/>
</div>
`,
styles: [],
standalone: true,
imports: [CounterButton],
})
export class CounterComponent {
constructor() {}
count = 1;
updateCounter(val: number) {
this.count += val;
}
}
Let’s update the counter button in the counter-button.component.ts file to receive the props.
@Component({
selector: 'counter-button',
template: `
<button (click)="incrementCounter()" [disabled]="disabled">
increment
</button>
`,
styles: [],
standalone: true,
})
export class CounterButton {
constructor() { }
@Input("disabled") disabled!: boolean;
@Output('handleIncrement')
handleIncrementEmitter = new EventEmitter();
incrementCounter() {
this.handleIncrementEmitter.emit(1);
}
}
When we start both applications, we see we can increment the counter, and the counter button gets disabled when the value exceeds 5.
When we look at both applications, we notice that we can increment both counters. Let’s look more closely at each application.
In the React application, we notice that only the counter gets incremented, but the time remains the same.
But in the Angular application, each time we increment the counter, the time is also updated in the app UI, as shown below.
This leads us to the question: How do Angular and React detect that a component has changed, and how does the changed component render itself and update the UI?
In React, state changes are what enable React to know when a component needs to re-render. If you look at our Counter component, we use the useState hook to manage the value of the count. Whenever we call our setCount function, React knows that the state has most likely changed and that it needs to re-render our Counter component and update the app UI with the new value. But how does this work?
Remember our app component tree as shown below.
Let’s take a step back. Before our app tree (i.e., App, Counter and CounterButton) is rendered in the browser window, React represents our entire app as a tree of plain JavaScript objects (all the JSX we have included in our React app and our component functions form the nodes in this tree). This tree is referred to as the virtual DOM. The virtual DOM is then parsed and rendered into the browser’s DOM.
The browser’s DOM maintains references to all the functions and data that the virtual DOM knows.
As the user interacts with the app and triggers any state-changing function—e.g., when the user clicks the counter button and calls its handleIncrement()
function, which in turn calls the setCount()
function to update the counter—React is aware that something has changed in our counter.
React then creates a new version of the virtual DOM from the old one (immutability) with the count updated and then compares the old and the new virtual DOM to see what has changed using its special diffing algorithm.
React then updates only the counter and its subtree (i.e., the Counter and CounterButton, but not the App component) in the browser’s DOM to reflect the new change. This efficient process React uses to update only what is necessary is referred to as reconciliation.
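The diffing idea can be illustrated with a toy sketch. This is not React’s real algorithm (which also compares element types, keys and props); VNode and changedTags are hypothetical names, and only text content is compared:

```typescript
// A node in our toy "virtual DOM": a tag, its text content and its children.
interface VNode {
  tag: string;
  text: string;
  children: VNode[];
}

// Walk old and new trees in lockstep and collect the tags whose text changed.
function changedTags(oldNode: VNode, newNode: VNode): string[] {
  const changed: string[] = [];
  if (oldNode.text !== newNode.text) changed.push(newNode.tag);
  for (let i = 0; i < newNode.children.length; i++) {
    changed.push(...changedTags(oldNode.children[i], newNode.children[i]));
  }
  return changed;
}

const oldTree: VNode = {
  tag: "App",
  text: "",
  children: [{ tag: "Counter", text: "counter value is 0", children: [] }],
};
const newTree: VNode = {
  tag: "App",
  text: "",
  children: [{ tag: "Counter", text: "counter value is 1", children: [] }],
};

// Only the Counter subtree changed, so only it would be touched in the DOM.
const dirty = changedTags(oldTree, newTree); // ["Counter"]
```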
When an Angular component is created, the framework automatically creates a change detector for it. There are two change detection strategies supported by Angular: Default and OnPush.
By default, all components use the Default strategy, but you can set the changeDetection property in the component decorator to change this behavior.
In Angular applications, change detection is typically triggered when a DOM event is handled, such as a button click, or when the special async pipe receives a new value; both may modify some internal state of the component and change the template the component renders.
Generally, when change detection occurs, Angular walks the component tree, checks each component’s template bindings against the component’s current state and updates the DOM wherever the values differ.
Let’s look at our application to see what happens in this framework’s default change detection mode.
When the counter button is clicked, its incrementCounter function is called. Since this function is bound to a DOM event (a click in our case), the Zone.js library wraps the incrementCounter function. Once this function is done executing, it marks the counter button component and its ancestors as dirty, then notifies Angular that it needs to look at the App component tree.
Angular traverses this tree from top to bottom; since it already sees that the App component is dirty, it doesn’t need to inspect each child individually: it re-renders the whole tree, and the components update to reflect the new value. This is why the time changes in the App component after incrementing the counter.
To understand the OnPush change detection strategy, let’s temporarily modify our app tree and include a component called OnPushComponent. We won’t create a new file for this dummy component since it is only for demonstration.
@Component({
selector: "on-push-comp",
template: ` <h1>the time is {{ now() }}</h1> `,
styles: [],
standalone: true,
changeDetection: ChangeDetectionStrategy.OnPush,
})
export class OnPushComponent {
constructor() {}
now() {
return new Date().toLocaleTimeString();
}
}
Notice that this component uses the OnPush change detection strategy. Let’s register it in our app component.
@Component({
selector: 'app-root',
// templateUrl: './x.htm',
template: `
<h1>Hello World time is {{ now() }}</h1>
<img [src]="imageURL" alt="" />
<h1>This is a counter app</h1>
<counter-comp>
<!-- <span>this is rendered in counter</span> -->
</counter-comp>
<on-push-comp />
`,
styles: [
`
* {
display: block;
}
`,
],
standalone: true,
imports: [CounterComponent, OnPushComponent],
providers: [WayooService],
changeDetection: ChangeDetectionStrategy.OnPush,
})
A component marked with the OnPush change detection strategy is not marked as dirty if its data or children subtree does not change. It is only marked as dirty in scenarios such as when one of its input references changes, when a DOM event fires in its template, or when change detection is triggered manually via the ChangeDetectorRef object retrieved through dependency injection.
So, in our mini app, if the user increments the counter, Zone.js notifies Angular to check for dirty components and re-render the application. When Angular reaches our stubborn OnPush component, it sees that nothing relevant to it has changed, so the component is not re-rendered, as shown below.
It is important to carefully analyze the performance benefits and clearly understand the behavior of the OnPush change detection strategy before using it, so that you don’t end up with a broken application.
We want to enable our Counter component to render a piece of UI when the counter value exceeds 5. Conditional rendering allows components to render something when certain conditions are met.
Update the Counter.tsx
file to match the following:
export function Counter(props: PropsWithChildren) {
  const [count, setCount] = useState(0);
  function updateCounter(val: number) {
    setCount(count + val);
  }
  const styles: CSSProperties = {
    border: "2px solid red",
  };
  return (
    <div style={styles}>
      <h2>counter value is {count}</h2>
      {count > 5 ? <h4>count is > 5</h4> : null}
      <CounterButton
        handleIncrement={(val: number) => updateCounter(val)}
        disabled={count > 5}
      />
      {props.children}
    </div>
  );
}
Components can conditionally render other components using the ternary operator. For more control, you can also call a helper function that decides what to return using normal if-else statements.
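The “function with if-else” approach can be sketched as a plain helper. renderBadge is hypothetical, and it returns a string here for simplicity; in a real component it would return JSX such as <h4>count is > 5</h4>:

```typescript
// Hypothetical helper: decide what to render using ordinary if-else logic
// instead of a ternary inside the JSX.
function renderBadge(count: number): string | null {
  if (count > 5) {
    return "count is > 5";
  }
  return null; // returning null means "render nothing" in React
}
```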
Update the counter-component.ts
file to look like so:
@Component({
selector: 'counter-comp',
template: `
<div style="border:2px solid red">
<h2>counter value is {{ count }}</h2>
@if (count > 5) {
<h4>count is > 5</h4>
}
<counter-button
(handleIncrement)="updateCounter($event)"
[disabled]="count > 5"
/>
<ng-content></ng-content>
</div>
`,
styles: [],
standalone: true,
imports: [CounterButton],
})
In Angular, we can conditionally render components using the @if syntax.
React also lets us extract logic into custom hooks. Let’s update the Counter.tsx file to include a custom hook called useCounter.
function useCounter() {
const [count, setCount] = useState(0);
function updateCounter(val: number) {
setCount(count + val);
}
return { updateCounter, count };
}
export default useCounter;
You can put the useCounter
hook in a separate file, but we are fine with our setup in this case.
Not everything in React returns UI. Here, we created a custom hook, which is just a function that, in our case, uses the useState hook to maintain the value of the counter and exposes a function to update it.
Let’s include this hook in our counter component, as shown below.
export function Counter(props: PropsWithChildren) {
const { updateCounter, count } = useCounter();
const styles: CSSProperties = {
border: "2px solid red",
};
return (
<div style={styles}>
<h2>counter value is {count}</h2>
{count > 5 ? <h4>count is > 5</h4> : null}
<CounterButton
handleIncrement={(val: number) => updateCounter(val)}
disabled={count > 5}
/>
{props.children}
</div>
);
}
Angular allows components to share logic via services. Let’s create a service called CounterService in a file called counter.service.ts.
import { Injectable } from "@angular/core";
@Injectable({
providedIn: "root",
})
export class CounterService {
count = 1;
updateCounter(val: number) {
this.count += val;
}
}
Services are classes decorated with the @Injectable decorator. To use this service, we need to register it. Depending on our application’s needs, we can register a service in a component high up in our component tree. This will make the service accessible to that component’s children.
In our case, since it’s only our counter that needs this service, let’s register it there. Update the counter-component.ts
file as shown below:
import {
Component,
EventEmitter,
Input,
OnInit,
Output,
inject,
} from "@angular/core";
import { CounterButton } from "./counter-button.component";
import { CounterService } from "./counter.service";
@Component({
selector: "counter-comp",
template: `
<div style="border:2px solid red">
<h2>counter value is {{ counterService.count }}</h2>
@if (counterService.count > 5) {
<h4>count is > 5</h4>
}
<counter-button
(handleIncrement)="counterService.updateCounter($event)"
[disabled]="counterService.count > 5"
/>
<ng-content></ng-content>
</div>
`,
styles: [],
standalone: true,
imports: [CounterButton],
providers: [CounterService],
})
export class CounterComponent {
counterService: CounterService = inject(CounterService);
}
The counter service is registered via the component’s providers array; it is then accessed in the CounterComponent class through dependency injection, using the inject function.
Components also have a lifecycle: they are created, updated and destroyed, and developers can run custom logic at each of these points.
React provides developers with several hooks to run code at points in a component’s lifetime, such as the following:
useLayoutEffect(): This allows you to run some code before a component is painted to the screen.
useEffect(): This hook allows you to run some code when the component mounts, when its dependencies change and when the component is removed. Typically, this is where data gets fetched from the server for a component and where you write clean-up code for when the component gets destroyed.
Let’s write some code to print a message to the console each time our counter value changes.
import { CSSProperties, PropsWithChildren, useEffect, useState } from "react";
export function Counter(props: PropsWithChildren) {
const { updateCounter, count } = useCounter();
useEffect(() => {
console.log("count changed");
return () => console.log(" this function runs the cleanup");
}, [count]);
const styles: CSSProperties = {
border: "2px solid red",
};
return (
<div style={styles}>
<h2>counter value is {count}</h2>
{count > 5 ? <h4>count is > 5</h4> : null}
<CounterButton
handleIncrement={(val: number) => updateCounter(val)}
disabled={count > 5}
/>
{props.children}
</div>
);
}
At the simplest level, to run some code when a component gets created, we can include it in the component’s constructor. However, Angular also provides developers with a long list of lifecycle methods they can add to their components: when the component is initialized (data-fetching operations are typically done here), when its view and all its children’s views are rendered, when inputs change, and so on. You can learn more about these lifecycle hooks in the Angular documentation.
Talking about modern frontend frameworks without talking about components is almost impossible. This guide gives developers a hands-on look at component architecture in Angular and React; hopefully, this knowledge helps both new and seasoned developers see the value in exploring other frameworks and using them in their future work, despite the nuances, similarities and differences between them.
Continue exploring Angular or React topics in our Basics series.
Sometimes when you write a program, you need to wait a few seconds before doing anything. This is certainly the case when fetching data, re-fetching data and updating data. You may have 50 triggers at once, when all you need is one.
Debouncing is a way of skipping all this input garbage and waiting for things to calm. Then, and only then, will the expected function run.
The best example and way to show debouncing is with a timer. Let’s say we want to run the function getData()
at the appropriate time.
First we need to wait a good time, say 5 seconds, before running the function.
setTimeout(() => console.log('5 seconds'), 5000);
This will make sure nothing gets run until 5 seconds pass. This is great, but we may need to cancel this.
const timeout = setTimeout(() => console.log('5 seconds'), 5000);
clearTimeout(timeout);
If we run this, it will allow us to clear the timeout, meaning we cancel the function scheduled inside setTimeout. However, since we don’t have any mechanism to reschedule it, it will always get cleared.
Here we put the timeout inside of a function. If the timeout exists, it will get cleared out. Notice we have to declare the timeout
variable outside of the run
function in order for it to persist. Otherwise it would just declare a new timeout every time and would never get canceled.
let timeout: NodeJS.Timeout;
const run = () => {
clearTimeout(timeout);
timeout = setTimeout(() => console.log('5 seconds'), 5000);
};
run();
run();
run();
Since run is called three times in a row, the timeout never has time to complete: each new call happens before 5 seconds have elapsed. The first two timeouts get canceled, and only the third one runs as expected. This is what prevents extraneous calls. This is called debouncing.
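To see the cancellation behavior without waiting on real timers, here is a sketch that swaps setTimeout for a manual clock. Task, debounced and tick are made-up names for illustration only:

```typescript
// A pending task: the time at which it may fire, and the function to run.
interface Task {
  at: number;
  fn: () => void;
}

let now = 0; // our manual clock, in "milliseconds"
let pending: Task | undefined;

// Scheduling a new task overwrites (i.e., cancels) any previously pending one,
// exactly like clearTimeout followed by setTimeout.
function debounced(fn: () => void, waitFor: number): () => void {
  return () => {
    pending = { at: now + waitFor, fn };
  };
}

// Advance the clock; fire the pending task once its time has come.
function tick(ms: number): void {
  now += ms;
  if (pending && now >= pending.at) {
    pending.fn();
    pending = undefined;
  }
}

let calls = 0;
const run = debounced(() => { calls++; }, 5000);

run();      // schedules for t = 5000
tick(3000); // t = 3000: too early, nothing fires
run();      // reschedules for t = 8000, canceling the first task
tick(3000); // t = 6000: still too early
tick(2000); // t = 8000: fires exactly once
```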
If we are using this one time in our program and we never think we will use this function again (highly unlikely for any web programmer), then we could just stop here:
let timeout: NodeJS.Timeout;
const debounce = () => {
clearTimeout(timeout);
timeout = setTimeout(() => getData(), 5000);
};
debounce();
debounce();
debounce();
However, we would be breaking the Single Responsibility Principle, when we could easily make this a reusable module. I have found, after more than 20 years of programming, that this one principle saves me the most time.
The problem with creating a reusable module is the timeout variable. We need to make sure it gets reused. We also don’t want to re-pass the timeout and function.
DON’T DO THIS!!!
let timeout: NodeJS.Timeout;
const getData = () => {
console.log('getting data...');
};
const debounce = (func: () => void, waitfor: number) => {
clearTimeout(timeout);
timeout = setTimeout(() => func(), waitfor);
};
debounce(getData, 5000);
debounce(getData, 5000);
debounce(getData, 5000);
The version above shares a single module-level timeout, so two different debounced functions would cancel each other. Instead, we need a function we can import that sets up its own variables, then call whenever and as many times as we like:
useDebounce.ts
export function useDebounce<F extends (...args: Parameters<F>) => ReturnType<F>>(
func: F,
waitFor: number,
): (...args: Parameters<F>) => void {
let timeout: NodeJS.Timeout;
return (...args: Parameters<F>): void => {
clearTimeout(timeout);
timeout = setTimeout(() => func(...args), waitFor);
};
}
We can import this function anywhere we like, and reuse it in any framework!
import { useDebounce } from './useDebounce';
const getData = () => {
console.log('getting data...');
};
const debounce = useDebounce(getData, 5000);
debounce();
debounce();
debounce();
As you can see, we get reusable code with the same results. No need to keep track of timers or re-pass our data.
One use might be autosaving drafts:
const saveDraft = () => {
// save draft
};
const debounce = useDebounce(saveDraft, 5000);
...
<input type="text" oninput="debounce()" />
You may see onchange or onkeyup used here as well. The same pattern applies to other frequently firing events.
This generally would be a good idea for anything you have to calculate as well. If you have a game, you may not want to recalculate the position until a player gets done moving. You will see many examples with mouse movements. There are endless use cases.
Getting debounce to work is not always as straightforward as it seems. You must declare a function that returns a function. Also, getting the TypeScript types to satisfy ESLint may not be obvious across tool versions. I tried to give a version that can work anywhere.
Debouncing is a necessity for any intermediate or above programmer. Use and reuse this function, and you will never have to waste time again thinking about it.
Vue 3 is leaps ahead of its predecessor in ease of use and compatibility with TypeScript. However, sometimes the information on the key things to know and get started can be a little hard to digest.
In this article, I aim to explain and list the concepts that I feel are the most common and relevant to keep in mind when starting your Vue 3 + TS journey. Please note that this is not a TypeScript tutorial and I will assume basic knowledge of the language.
TypeScript tooling and setup is quite a rabbit hole, and one could probably write not one but several in-depth articles about how to set up a project for the perfect integration between the two.
However, I think there is one important gotcha that I want to mention before we get going with actual TS and that is Volar’s takeover mode.
I assume you will be using Volar and VS Code here. If you are using different tooling, you can safely skip to the next section.
Save yourself some headaches now and go through the link above for the documentation on how to set up takeover mode—it’s not obvious that one has to set up VS Code like this for Volar to work properly with TypeScript.
Once you’re done, also make sure to check that the TypeScript version that Volar is using is the same as your package. This mismatch can quickly create some headaches when running type checks in CI or outside of your personal dev environment.
You’ll want to select “Use Workspace Version” unless you are 100% sure you want a different version powering Volar than what your workspace is using.
When working with components created with the Options API, you will want to import and use the defineComponent
helper from Vue to make sure that the component is correctly typed when imported into other files.
<script lang="ts">
import { defineComponent } from 'vue'
export default defineComponent({
})
</script>
You don’t need this when using script setup
sugar, as it will already be typed correctly for you.
Correctly typing your component props is arguably one of the most important parts of using TS with Vue 3. This will ensure type safety even within your template
tags, so how do we go about typing things?
<script lang="ts">
import { defineComponent } from 'vue'
export default defineComponent({
props: {
someNumber: { type: Number, default: 0 },
user: { type: Object, default: null }
}
})
</script>
<script setup lang="ts">
defineProps({
someNumber: { type: Number, default: 0 },
user: { type: Object, default: null }
})
</script>
In the above example, we have both the Options API and the Composition API defining the exact same props without any special TS types. They both create a someNumber prop, which is typed as a Number, and a user prop, typed as a generic Object.
When dealing with primitives like number, boolean and string, we don’t really need to do anything to tell TypeScript the type of the prop, as it will be inferred from the type that Vue provides, as in the case of someNumber.
However, we probably want to use a more specific type for our user prop.
I will assume we have the following type defined:
interface User {
id: number
name: string
}
Now, we can use the special PropType type that Vue provides to define more specifically what that Object in user is.
<script lang="ts">
import { defineComponent, PropType } from 'vue'
export default defineComponent({
props: {
someNumber: { type: Number, default: 0 },
user: { type: Object as PropType<User>, default: null }
}
})
</script>
<script setup lang="ts">
import { PropType } from 'vue'
defineProps({
someNumber: { type: Number, default: 0 },
user: { type: Object as PropType<User>, default: null }
})
</script>
Now that we have props correctly typed, we want to make sure that our emits are also strictly typed so that Volar and TypeScript can help us check that we are using emit payloads correctly.
<script lang="ts">
import { defineComponent } from 'vue'
export default defineComponent({
emits: ['userSelected', 'click']
})
</script>
<script setup lang="ts">
defineEmits(['userSelected', 'click'])
</script>
In the above example, we again see both Options API and Composition API ways to define emits in a component. We are choosing the shorthand version of emits (there is a longer version that allows validating the payload, similar to how props work).
The problem with leaving it like this when using TypeScript is that the content of the emit will be assumed to be any, which is not very helpful. Let’s go ahead and type these emits correctly.
We will assume that the click event does not emit a payload, and that userSelected passes a user object.
<script lang="ts">
import { defineComponent } from 'vue'
export default defineComponent({
emits: {
// eslint-disable-next-line
'userSelected': (user: User) => true,
// eslint-disable-next-line
'click': () => true
}
})
</script>
<script setup lang="ts">
defineEmits<{
'userSelected': [user: User]
'click': []
}>()
</script>
There are a couple of things worth unpacking in this example. Let’s start with the Options API.
The emits property is now an object instead of an array of strings. This is the extended way of defining emits, where each emit declares a function that works as a validator, similar to the validator property in props. I’ve opted to return true because we don’t need to validate the emit, since TypeScript is probably validation enough; this is up to you entirely.
The parameter found inside the validation function will be assumed by TypeScript to be the payload of the emit, so now when we have a component listening for the userSelected event, TypeScript will know we are expecting a User.
Note that I’ve added an ESLint disable comment on both lines. Depending on your ESLint rules, this may not be necessary, but with the default Vue recommended rules you will get an error on the first function since the user param is not being used. We technically don’t need it for click, but it’s become a habit of mine to add it everywhere to avoid headaches.
The Composition API version is also a bit different. Notice that the actual declaration of the emits is no longer a parameter inside the defineEmits function, but rather a type argument. So now it lives within the <{}> before the ().
We define our emits by setting a property like userSelected, and then with a tuple syntax we define the payload. So [user: User] means we will have a user payload with a User type. Once again, this will allow TypeScript to determine the payload another component can expect when listening for this particular event.
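To see what the tuple syntax buys us outside of Vue, here is a plain-TypeScript sketch of the resulting call signature. EmitMap, EmitFn and emitted are hypothetical names; this is not Vue’s internal implementation:

```typescript
interface User {
  id: number;
  name: string;
}

// Roughly what defineEmits<{ userSelected: [user: User]; click: [] }>() gives
// us: each event name maps to a tuple of its payload arguments.
type EmitMap = { userSelected: [user: User]; click: [] };
type EmitFn = <K extends keyof EmitMap>(event: K, ...args: EmitMap[K]) => void;

const emitted: Array<[string, unknown[]]> = [];
const emit: EmitFn = (event, ...args) => {
  emitted.push([event, args]);
};

emit("click"); // OK: no payload allowed
emit("userSelected", { id: 1, name: "Ada" }); // OK: exactly one User payload
// emit("userSelected") would be a compile-time error: the payload is missing
```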
There are a few types that you are going to be using a lot; you import these from 'vue'. Consider the following ref:
const user = ref({ id: 123, name: 'Marina' })
In cases like this, you may want to strictly type your ref, so that instead of TS assuming this is an object with an id and name, it’s a User type.
const user: Ref<User> = ref({ id: 123, name: 'Marina' })
Note that a Ref can also hold ComputedRef values, so if you have a function where you would accept either a ref or a computed value, you can safely use Ref.
const myFn = (param: Ref) => {
return param.value // .value is defined as it exists in both computed and ref
}
In composables, you may call unref a lot, since we don’t know if our user is going to pass in a raw value or a ref. In these cases, we can use the MaybeRef type.
export default (val: MaybeRef<boolean>) => {
const rawBool = unref(val)
}
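The contract of unref itself is tiny and can be sketched in plain TypeScript. FakeRef, MaybeFakeRef and this local unref are hypothetical stand-ins for Vue’s Ref, MaybeRef and unref:

```typescript
// Minimal stand-ins for Vue's types: a ref is just a { value } box.
interface FakeRef<T> {
  value: T;
}
type MaybeFakeRef<T> = T | FakeRef<T>;

function isRef<T>(v: MaybeFakeRef<T>): v is FakeRef<T> {
  return typeof v === "object" && v !== null && "value" in (v as object);
}

// unref returns .value for a ref and the raw value otherwise, so callers can
// accept either shape without caring which one they were given.
function unref<T>(v: MaybeFakeRef<T>): T {
  return isRef(v) ? v.value : v;
}
```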
When typing a template ref, use HTMLElement for plain DOM elements, and ComponentPublicInstance to type any generic component.
<template>
<p ref="myP">Example</p>
<SomeComponent ref="myComp" />
</template>
<script setup lang="ts">
import { ref, Ref, ComponentPublicInstance } from 'vue'
const myP: Ref<HTMLElement|null> = ref(null)
const myComp: Ref<ComponentPublicInstance|null> = ref(null)
const doSomething = () => {
// TS knows about $el because it's a ComponentPublicInstance
myComp.value?.$el
}
</script>
If you need to access specific methods within the component ref, however, you will have to use InstanceType instead of ComponentPublicInstance to define it.
<template>
<p ref="myP">Example</p>
<SomeComponent ref="myComp" />
</template>
<script setup lang="ts">
import { ref, Ref } from 'vue'
import SomeComponent from './SomeComponent.vue' // adjust the path to where SomeComponent lives
const myP: Ref<HTMLElement|null> = ref(null)
const myComp: Ref<InstanceType<typeof SomeComponent>|null> = ref(null)
const doSomething = () => {
// TS knows about myMethod inside of `SomeComponent`
myComp.value?.myMethod()
}
</script>
When using global components that are not specifically imported into our components, we need to declare them somewhere for TypeScript to work.
Within global.d.ts, you can add the following declaration with your global components as needed. I’ve added an example with RouterLink and RouterView from Vue Router, as they are commonly used as global components.
import BaseCheckbox from './globals/BaseCheckbox.vue'
declare module '@vue/runtime-core' {
export interface GlobalComponents {
RouterLink: typeof import('vue-router')['RouterLink']
RouterView: typeof import('vue-router')['RouterView']
// Custom components example
BaseCheckbox: typeof BaseCheckbox
}
}
This is really only scratching the surface of TS integration with Vue 3, but I hope that these must-know key topics help you get started quickly and effectively in your TS-Vue 3 journey.
In a previous article in the Blazor Basics series, we learned how to create HTML forms and capture user data. We also learned how to implement basic form data validation with Blazor using .NET data annotations.
In this article, we will explore more advanced form validation techniques.
You can access the code used in this example on GitHub, or recreate it following the code snippets throughout this article.
We use the same user form used previously within this Blazor Basic series. It contains a username, a password and a password confirmation field.
We learned about the built-in EditForm
component we can use to create forms and handle form submission and form validation. Behind the scenes, the EditForm
component initializes and uses an EditContext
. The context contains information shared with input fields.
The built-in InputText
(and similar types) components access contextual information, such as the data object provided to the EditForm
component’s Model
property.
Using the EditForm
component, we get a simple component structure and a lot of built-in default behavior. However, when we want to get more granular control over the form, we can manually create the EditContext
and provide it to the EditForm
component.
Let’s take a look at the following example:
<EditForm EditContext="@EditContext" OnValidSubmit="@Submit">
@* Input fields omitted *@
</EditForm>
@code {
public User? UserModel { get; set; }
public EditContext EditContext { get; set; }
protected override void OnInitialized()
{
UserModel = new User();
EditContext = new EditContext(UserModel);
}
}
Similar to using the EditForm
component and providing an object to its Model
property, we can instead provide an object to its EditContext
property.
We create the EditContext
within the OnInitialized
lifecycle method and provide the UserModel
as its sole constructor argument.
So far, the form behaves the same as if we directly provided the Model
property. However, we now have a reference to the EditContext
object.
We can now enable data annotations-based validation through the EditContext object instead of providing a child validator component:
EditContext.EnableDataAnnotationsValidation();
Or we can still use the DataAnnotationsValidator
component as a child component of the EditForm
component.
We now want to implement a custom validation for the password confirmation field.
Currently, when a user inputs values into the password and the password confirmation field, the validation based on data annotations will be triggered.
Take a look at the User
class attributed with the data annotations.
public class User
{
[Required]
[SupportedUsername]
[StringLength(16, MinimumLength = 4, ErrorMessage = "The username must be between 4 and 16 characters.")]
public string? Username { get; set; }
[Required]
[StringLength(24, MinimumLength = 10, ErrorMessage = "The password must be between 10 and 24 characters.")]
public string? Password { get; set; }
[Required]
[StringLength(24, MinimumLength = 10, ErrorMessage = "The password must be between 10 and 24 characters.")]
public string? PasswordConfirmation { get; set; }
}
In addition to validating the password length, we want to ensure that the password is identical to the value entered in the password confirmation field.
We add an instance of the ValidationMessageStore
type to the form component. It will hold the validation messages to display them on the screen.
@code {
public User? UserModel { get; set; }
public EditContext? EditContext { get; set; }
public ValidationMessageStore? MessageStore;
protected override void OnInitialized()
{
UserModel = new User();
EditContext = new EditContext(UserModel);
MessageStore = new ValidationMessageStore(EditContext);
EditContext.OnValidationRequested += HandleValidationRequested;
}
}
Like any other Blazor form, we first initialize an instance of the model class. Next, we create the EditContext object and provide the data model as its constructor argument.
We now create an instance of the ValidationMessageStore
type and provide the EditContext
as its argument.
We can now register a validation method with the OnValidationRequested event of the EditContext. We add the HandleValidationRequested method, which we will implement next.
As stated above, we want to add a custom rule, accessing the two password fields and comparing their values to see if they are equal.
private void HandleValidationRequested(object? sender, ValidationRequestedEventArgs args)
{
MessageStore?.Clear();
if (UserModel?.PasswordConfirmation != UserModel?.Password)
{
MessageStore?.Add(() => UserModel.PasswordConfirmation, "Passwords do not match.");
EditContext?.NotifyValidationStateChanged();
}
}
First of all, we clear the validation messages within the MessageStore
. It makes sure that whenever the validation is triggered, the old validation messages are removed from the MessageStore
before the validation is executed.
Next, we add the custom validation rule as an if statement. We access the PasswordConfirmation and Password properties on the UserModel object. They contain the values entered into the form fields.
If the value of the password confirmation field doesn’t equal the value within the password field, we add a message to the MessageStore
. The first parameter references the field on the UserModel
object. The second parameter
contains the message that will be shown to the user.
We also need to call the NotifyValidationStateChanged
method on the EditContext
to let it know that there are new validation messages.
When we start the application and enter different values into the password and the password confirmation fields, we see the “Passwords do not match” text above the form.
It is rendered by the ValidationSummary
component that we still use as a child component of the EditForm
component.
Currently, the custom validation method is called when the user submits the form. If you want to validate whenever a field is changed, you can use the OnFieldChanged event instead of the OnValidationRequested event on the EditContext instance.
Be aware that when you use the OnFieldChanged event, you also need to make sure you only validate the password confirmation field when the field contains a value. Otherwise, the validation message will be shown as soon as the user enters a value in the regular password field.
You could use the following code:
private void HandleFieldChanged(object? sender, FieldChangedEventArgs args)
{
MessageStore?.Clear();
if (UserModel?.PasswordConfirmation != UserModel?.Password &&
UserModel?.PasswordConfirmation?.Length > 0)
{
MessageStore?.Add(() => UserModel.PasswordConfirmation, "Passwords do not match.");
EditContext?.NotifyValidationStateChanged();
}
}
Notice the additional part in the if statement that checks whether the length of the PasswordConfirmation property is greater than 0.
As stated above, you can register the HandleFieldChanged method with the EditContext by adding the following line in the OnInitialized method:
EditContext.OnFieldChanged += HandleFieldChanged;
The default behavior of the input fields generated using the built-in Input* components provides an error indicator using a red border.
If we want to change the styling of our input fields, we can use the FieldCssClassProvider as the base class for our implementation.
Consider the following code:
public class CustomFieldClassProvider : FieldCssClassProvider
{
public override string GetFieldCssClass(EditContext editContext, in FieldIdentifier fieldIdentifier)
{
var isValid = !editContext.GetValidationMessages(fieldIdentifier).Any();
return isValid ? "valid-field" : "invalid-field";
}
}
We create a new CustomFieldClassProvider class inheriting from the built-in FieldCssClassProvider class. We override the GetFieldCssClass method to implement custom code.
In this example, we check the EditContext for existing validation messages. We then use the result to decide whether to add the valid-field or the invalid-field CSS class to the input component.
Next, we define the CSS classes in the site.css file within the wwwroot/css folder of the project.
.valid-field {
border: 5px dotted yellow;
}
.invalid-field {
border: 5px dashed orange;
}
We define a yellow dotted border with a width of 5px as the valid-field CSS class. We also create an invalid-field CSS class that uses a dashed orange border with the same width.
As the final step, we need to assign the CustomFieldClassProvider class to the EditContext. The following code allows us to set a custom field class provider on an EditContext.
EditContext.SetFieldCssClassProvider(new CustomFieldClassProvider());
We add this line at the end of the OnInitialized method in our Blazor component.
When we start the application, we can see the custom CSS classes in action.
Sometimes, we want the Submit button only to be active when the form is in a valid state.
When we have access to the EditContext, we can implement this behavior directly inside the template code where we define the submit button.
<button type="submit" disabled="@(!EditContext?.Validate())">Register</button>
We use the default HTML disabled attribute and the Validate method on the EditContext to control the behavior.
When we start the application, we can see that the submit button is disabled unless all form fields are valid. As soon as all input fields contain a valid value, the submit button is enabled.
Besides the examples discussed in this article, Blazor also supports nested models, collection types and complex types as a model for the EditForm component.
However, the built-in DataAnnotationsValidator component only validates top-level objects that aren’t collections or complex types. For those types, we need to use the ObjectGraphDataAnnotationsValidator.
At the time of writing this article, the ObjectGraphDataAnnotationsValidator is part of the experimental NuGet package Microsoft.AspNetCore.Components.DataAnnotations.Validation.
If you want to learn more about validating forms using complex-type properties or collections, I suggest looking into this NuGet package.
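As a rough sketch of what that looks like, a nested model can be annotated with the [ValidateComplexType] attribute from that experimental package so the validator descends into it (the model and property names here are illustrative, not from the article's example):

```csharp
using System.ComponentModel.DataAnnotations;

public class UserModel
{
    [Required]
    public string? Name { get; set; }

    // Tells ObjectGraphDataAnnotationsValidator to validate this nested object.
    [ValidateComplexType]
    public AddressModel Address { get; set; } = new();
}

public class AddressModel
{
    [Required]
    public string? City { get; set; }
}
```

In the form markup, an ObjectGraphDataAnnotationsValidator component then replaces the DataAnnotationsValidator component.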
In this article, we learned how to manually create and use the EditContext type. It provides us with more granular control over how the HTML form is generated.
We also learned how to implement a custom validation rule that uses the values of multiple form fields, and we registered the validation rule on the EditContext.
By default, a Blazor form created using the EditForm component validates when the user presses the submit button. However, we learned how to change this behavior to validate when the user changes a field by registering an event callback method on the OnFieldChanged event of the EditContext.
We also learned that we can influence the CSS class added to the input fields by implementing a custom class provider and registering it via the SetFieldCssClassProvider method of the EditContext.
Last but not least, we learned how to utilize the EditContext to enable or disable the submit button based on whether the form is valid.
You can access the code used in this example on GitHub.
If you want to learn more about Blazor development, you can watch my free Blazor Crash Course on YouTube. And stay tuned to the Telerik blog for more Blazor Basics.
In the evolving world of web applications, real-time functionality has become a pivotal feature, enabling interactive and dynamic user experiences.
Whether it’s live chats, notifications or collaboration tools, instant feedback is critical for user experience. Great examples are chat applications where users can see each other’s messages instantly, or editor tools, such as Figma or Google Docs, that allow many users to collaborate in real time.
All of this is made possible by real-time technologies, such as WebSockets. In this article, we will take advantage of WebSockets and build a real-time application using React on the client side and Fastify with Node.js on the server-side.
WebSocket is a powerful communication protocol that enables two-way, full-duplex communication between a client and a server over a single, long-lived connection. Unlike traditional HTTP requests, which are stateless and involve opening a new connection for each request, WebSockets maintain a persistent connection.
You can find the full code example for this tutorial in the GitHub repository.
Let’s start by creating a client-side React app with Vite and server-side project with Fastify.
npm create vite@latest client -- --template react
cd client
npm install
npm run dev
A newly created Vite app runs on port 5173, so visit http://localhost:5173 in your browser to access it.
After the Vite project is created, we need to create the server side.
mkdir server
cd server
npm init -y
npm install fastify
server/index.mjs
import fastify from "fastify";
const app = fastify({
logger: true,
});
app.get("/", async (request, reply) => {
return { hello: "world" };
});
try {
await app.listen({ port: 3000 });
app.log.info(`Server is running on port ${3000}`);
} catch (error) {
app.log.error(error);
process.exit(1);
}
Next, add a "dev" script to run the server.
server/package.json
{
"name": "server",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"dev": "node --watch index.mjs"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"fastify": "^4.24.3"
}
}
Note that the node --watch command is only available since Node 18. If you’re using an older version, you can use Nodemon instead.
After running the npm run dev command, the Fastify server should start on port 3000. When you visit http://localhost:3000, you should see the following response in the browser.
There are multiple ways of implementing WebSockets on the server and client side. For example, we could use libraries such as ws and socket.io. However, Fastify has a core library called @fastify/websocket that provides WebSocket functionality and integrates well with the Fastify framework. Therefore, if you’re using Fastify in your project, consider using the @fastify/websocket library. Otherwise, you can use other solutions.
Let’s install @fastify/websocket and @fastify/cors in the server directory.
npm install @fastify/websocket @fastify/cors
If your project uses TypeScript, make sure to also install types.
npm i @types/ws -D
Next, we need to register the @fastify/websocket plugin to start listening for messages and the @fastify/cors plugin to allow connections from other ports. We need to do this because the React app runs on http://localhost:5173, while the Fastify app will run on http://localhost:3000.
server/index.mjs
import Fastify from "fastify";
import fastifyWebSockets from "@fastify/websocket";
import cors from "@fastify/cors";
const fastify = Fastify({
logger: true,
});
/**
* Register cors to allow all connections. Note that in production environments, you should
* narrow down domains that should be able to access your server.
*/
fastify.register(cors);
/**
* Register the Fastify WebSockets plugin.
*/
fastify.register(fastifyWebSockets);
/**
* Register a new handler to listen for WebSocket messages.
*/
fastify.register(async function (fastify) {
fastify.get(
"/online-status",
{
websocket: true,
},
(connection, req) => {
connection.socket.on("message", msg => {
connection.socket.send(`Hello from Fastify. Your message is ${msg}`);
});
}
);
});
fastify.get("/", async (request, reply) => {
return { hello: "world" };
});
try {
await fastify.listen({ port: 3000 });
fastify.log.info(`Server is running on port ${3000}`);
} catch (error) {
fastify.log.error(error);
process.exit(1);
}
Fastify will forward all WebSocket connections to the /online-status endpoint. When a new message is received, a response is sent immediately.
connection.socket.send(`Hello from Fastify. Your message is ${msg}`);
Next, let’s modify our React app to send and receive messages from the server.
client/src/App.jsx
import { useEffect } from "react";
import "./App.css";
/**
* Establish a new WebSocket connection.
*/
const ws = new WebSocket(`ws://localhost:3000/online-status`);
/**
* When a WebSocket connection is open, inform the server that a new user is online.
*/
ws.onopen = function () {
ws.send("hello from react");
};
function App() {
useEffect(() => {
/**
* Listen to messages and change the users' online count.
*/
ws.onmessage = message => {
console.log("message from server:", message.data);
};
}, []);
return <div></div>;
}
export default App;
We establish a new WebSocket connection and send the “hello from react” message when the connection is opened.
Now we have a working WebSocket connection. Let’s modify the client-side further to display the count of all online users sent from the server. Moreover, we can add a select to allow users to change their online status.
client/src/App.jsx
import { useEffect, useState } from "react";
import "./App.css";
/**
* Get a random user ID. This is fine for this example, but for production, use libraries like paralleldrive/cuid2 or uuid to generate unique IDs.
*/
const userId = localStorage.getItem("userId") || Math.random();
localStorage.setItem("userId", userId);
/**
* Establish a new WebSocket connection.
*/
const ws = new WebSocket(`ws://localhost:3000/online-status`);
/**
* When a WebSocket connection is open, inform the server that a new user is online.
*/
ws.onopen = function () {
ws.send(
JSON.stringify({
onlineStatus: true,
userId,
})
);
};
function App() {
/**
* Store the count of all users online.
*/
const [usersOnlineCount, setUsersOnlineCount] = useState(0);
/**
* Store the selected online status value.
*/
const [onlineStatus, setOnlineStatus] = useState();
useEffect(() => {
/**
* Listen to messages and change the users online count.
*/
ws.onmessage = message => {
const data = JSON.parse(message.data);
setUsersOnlineCount(data.onlineUsersCount);
};
}, []);
const onOnlineStatusChange = e => {
setOnlineStatus(e.target.value);
if (!e.target.value) {
return;
}
const isOnline = e.target.value === "online";
ws.send(
JSON.stringify({
onlineStatus: isOnline,
userId,
})
);
};
return (
<div>
<div>Users Online Count - {usersOnlineCount}</div>
<div>My Status</div>
<select value={onlineStatus} onChange={onOnlineStatusChange}>
<option value="">Select Online Status</option>
<option value="online">Online</option>
<option value="offline">Offline</option>
</select>
</div>
);
}
export default App;
Let’s digest the code step by step. First, we create a random ID for the user and save it in local storage so it’s not recreated on every page reload.
const userId = localStorage.getItem("userId") || Math.random();
localStorage.setItem("userId", userId);
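If you want something sturdier than Math.random() without pulling in a library, the same get-or-create pattern can be factored into a small helper. This is only a sketch: the storage parameter exists to make it testable, and in the browser you would pass localStorage and, for example, () => crypto.randomUUID().

```javascript
// Return a stable per-browser user ID, creating and persisting one if needed.
// `storage` is any object with getItem/setItem (e.g., window.localStorage);
// `generateId` produces a new ID (e.g., () => crypto.randomUUID()).
function getOrCreateUserId(storage, generateId) {
  const existing = storage.getItem("userId");
  if (existing) return existing;
  const id = generateId();
  storage.setItem("userId", id);
  return id;
}
```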
Further, when a WebSocket connection is opened, the server is notified that a new user has visited the page.
/**
* When a WebSocket connection is open, inform the server that a new user is online.
*/
ws.onopen = function () {
ws.send(
JSON.stringify({
onlineStatus: true,
userId,
})
);
};
After receiving this message, the server will broadcast a message to all subscribed clients that the online users’ status has changed. We will implement this in a moment.
We have two states. The first one, usersOnlineCount, will store the count of all online users. This information will be sent from the server. The second state stores the information about the user’s selected online status.
/**
* Store the count of all users online.
*/
const [usersOnlineCount, setUsersOnlineCount] = useState(0);
/**
* Store the selected online status value.
*/
const [onlineStatus, setOnlineStatus] = useState();
With useEffect, we listen for new messages and update the online users count accordingly.
useEffect(() => {
/**
* Listen to messages and change the users online count.
*/
ws.onmessage = message => {
const data = JSON.parse(message.data);
setUsersOnlineCount(data.onlineUsersCount);
};
}, []);
Finally, the onOnlineStatusChange method keeps the state in sync with the select element and notifies the server when the user’s status changes.
const onOnlineStatusChange = e => {
setOnlineStatus(e.target.value);
if (!e.target.value) {
return;
}
const isOnline = e.target.value === "online";
ws.send(
JSON.stringify({
onlineStatus: isOnline,
userId,
})
);
};
Let’s update the server so it stores online users and updates the count whenever a user’s online status changes.
server/index.mjs
import Fastify from "fastify";
import fastifyWebSockets from "@fastify/websocket";
import cors from "@fastify/cors";
const fastify = Fastify({
logger: true,
});
/**
* Register cors to allow all connections. Note that in production environments, you should
* narrow down domains that should be able to access your server.
*/
fastify.register(cors);
/**
* Register the Fastify WebSockets plugin.
*/
fastify.register(fastifyWebSockets);
const usersOnline = new Set();
/**
* Register a new handler to listen for WebSocket messages.
*/
fastify.register(async function (fastify) {
fastify.get(
"/online-status",
{
websocket: true,
},
(connection, req) => {
connection.socket.on("message", msg => {
const data = JSON.parse(msg.toString());
if (
typeof data === "object" &&
"onlineStatus" in data &&
"userId" in data
) {
// If the user is not registered as logged in yet, we add this user's id.
if (data.onlineStatus && !usersOnline.has(data.userId)) {
usersOnline.add(data.userId);
} else if (!data.onlineStatus && usersOnline.has(data.userId)) {
usersOnline.delete(data.userId);
}
/**
* Broadcast the change in online users status to all subscribers.
*/
fastify.websocketServer.clients.forEach(client => {
if (client.readyState === 1) {
client.send(
JSON.stringify({
onlineUsersCount: usersOnline.size,
})
);
}
});
}
});
}
);
});
fastify.get("/", async (request, reply) => {
return { hello: "world" };
});
try {
await fastify.listen({ port: 3000 });
fastify.log.info(`Server is running on port ${3000}`);
} catch (error) {
fastify.log.error(error);
process.exit(1);
}
On line 20, we have the usersOnline set that stores the IDs of currently online users. In a real app, this information could be handled using a solution like Redis, but for this example the above implementation will suffice.
After a user is connected, we listen for messages using connection.socket.on("message", msg => {}). In the message handler, we check whether the msg value received from the client is an object with onlineStatus and userId properties. If it is, we check whether the user’s status is online or offline. Based on the status, we either add or remove the user’s ID from the usersOnline set.
if (data.onlineStatus && !usersOnline.has(data.userId)) {
usersOnline.add(data.userId);
} else if (!data.onlineStatus && usersOnline.has(data.userId)) {
usersOnline.delete(data.userId);
}
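The add/remove decision above can also be isolated as a small pure function, which makes it easy to unit test independently of the WebSocket plumbing. This is a sketch; the function name is illustrative, not part of the article's code:

```javascript
// Apply an online-status message to the set of online user IDs.
// Adds the user when they go online, removes them when they go offline.
// Returns true if the set changed, false if the message was a no-op.
function applyOnlineStatus(usersOnline, { onlineStatus, userId }) {
  if (onlineStatus && !usersOnline.has(userId)) {
    usersOnline.add(userId);
    return true;
  }
  if (!onlineStatus && usersOnline.has(userId)) {
    usersOnline.delete(userId);
    return true;
  }
  return false;
}
```

The server handler could then broadcast only when applyOnlineStatus returns true, avoiding redundant messages for repeated identical status updates.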
Finally, the change in the online users count is broadcast to all subscribed clients.
fastify.websocketServer.clients.forEach(client => {
if (client.readyState === 1) {
client.send(
JSON.stringify({
onlineUsersCount: usersOnline.size,
})
);
}
});
That’s it. We have just implemented an app with real-time functionality. Whenever a new user visits the page, all users who are currently online will be notified about the online status change, as shown in the video below.
In this video, the same app is visited using different browsers to simulate different users. Whenever a new page is opened, the users count is updated immediately in other browsers. It also changes when the online status is changed using the user status select functionality.
We can use a tool like Progress Telerik Fiddler Everywhere to check whether the WebSockets were set up correctly and what messages are sent between a client and server. Fiddler Everywhere can be used as a local proxy to intercept and inspect HTTP and WebSocket requests.
The GIF above shows how to capture traffic to the http://localhost:3000/online-status
endpoint. As we change the online status, Fiddler records the messages sent between the clients and the server. For instance, we can see client messages that are sent when the user changes their online status, as well as messages from the server, which comprise the new online user count. Fiddler Everywhere can show various information about the messages, such as their size, content, when they were sent, who was the sender and more.
If you would like to learn more about how to use Fiddler Everywhere to inspect WebSocket connections and more, check out the documentation.
In this article, we have covered how to build a real-time application using WebSockets, React and Fastify. WebSockets are a great tool for implementing real-time communication. This tutorial should give you an understanding of how to add real-time functionality to your own applications.
Keep in mind the example in this tutorial is very simplified, as its purpose is to showcase how to use WebSockets. Real online-status tracking should also have some way of detecting whether a user has been idle for a specific period of time and then automatically change their status to offline.
Incremental Static Regeneration (ISR) is an incredible tool for caching your data on a CDN. The technology was released in 2020 by Vercel and built specifically for Next.js. Since then, however, it has been adapted for different frameworks on different hosting environments. It is quite useful and could save you thousands in hosting and database costs.
Caching has existed for a long time on server environments. With Cache-Control headers, you have the ability to save your state for a selected amount of time. All you do is return the proper header in your designated language (PHP, Node.js, etc.).
Cache-Control: max-age=<seconds>
or if you’re using a proxy:
Cache-Control: s-maxage=<seconds>
The data is fetched once, then saved to the server cache. When the cache expires, it will fetch again. You can also display the last cache and fetch in the background when the cache expires. This prevents the first user after the cache expires from having to wait for the content to regenerate. You can specify the time to allow the old cache to display.
Cache-Control: stale-while-revalidate=<seconds>
There are many other cache-control headers available.
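For instance, the two directives above can be combined in a single header value. The helper below is a sketch (the function name and the 60/300-second values are arbitrary choices, not from the article):

```javascript
// Build a Cache-Control value for a shared cache (proxy/CDN):
// cache for `sMaxAge` seconds, then serve the stale copy for up to
// `staleWhileRevalidate` seconds while revalidating in the background.
function buildCacheControl(sMaxAge, staleWhileRevalidate) {
  return `s-maxage=${sMaxAge}, stale-while-revalidate=${staleWhileRevalidate}`;
}

// Usage in a plain Node.js request handler (sketch):
// res.setHeader("Cache-Control", buildCacheControl(60, 300));
```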
Caching is extremely useful, but ultimately it’s not as robust as CDN caching. The caches are not shared across regions, and you can only revalidate the cache when it expires. There are other differences from ISR to note as well. That being said, if you don’t care about revalidation, cache-control is a great tool.
Before the term “on-demand revalidation” was coined, CDNs revalidated a cache by purging it. Usually you had to do this through the CDN dashboard, but you could also use headers.
Not all CDNs can be purged by a header request, but this definitely paved the way for what Vercel is doing.
Vercel, which uses Cloudflare under the hood, decided to create its own caching technique called Incremental Static Regeneration. Basically, it caches for a certain number of seconds, and then it revalidates in the background incrementally using a stale-while-revalidate technique.
You can do this without a framework with the Build Output API. The docs are not clear on using it without a framework, but you can see the example repo. ISR functions on both serverless functions and Edge Functions on the Edge Network.
What really makes ISR incredible is the ability to revalidate the cache on-demand. This means you could update the database, then revalidate your cache programmatically without having to manually touch a dashboard or redeploy your project. The technology has been built into the frameworks to make it easy for developers.
For Next.js:
revalidatePath('/');
All paths are cached by default until you revalidate them. You cannot, however, revalidate a path you are currently on. This means you need to revalidate from a different URL.
Server Actions, for example, are created on the current URL, so you would need to write your code accordingly. This now works with the app directory as well. You could, and probably should, also write your code to validate a key or token before the function gets called, for security.
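One way to sketch that token check is below. Note the header name and the REVALIDATE_TOKEN environment variable are illustrative assumptions, not a Next.js convention:

```javascript
// Decide whether a revalidation request is allowed to proceed.
// `headers` is a plain object of request headers; `expectedToken` is the
// configured secret (e.g., process.env.REVALIDATE_TOKEN).
function isAuthorized(headers, expectedToken) {
  const token = headers["x-revalidate-token"];
  // Reject when no secret is configured or the tokens don't match.
  return Boolean(expectedToken) && token === expectedToken;
}

// In a Next.js route handler you would then call (sketch):
//   import { revalidatePath } from "next/cache";
//   if (isAuthorized(reqHeaders, process.env.REVALIDATE_TOKEN)) revalidatePath("/");
```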
SvelteKit uses a token by default. To create the cache, put this at the top of the server file you want to cache:
export const config = {
isr: {
expiration: false,
bypassToken: BYPASS_TOKEN
}
};
And in a different file, you can revalidate it by calling the same file with this header:
await fetch('/', {
headers: {
'x-prerender-revalidate': BYPASS_TOKEN
}
});
Again, you need to revalidate from a different URL. You could also check the fetch results to make sure it completed successfully.
While other frameworks support ISR, I could not find an example of other frameworks supporting on-demand ISR. Again, it should be possible to build this manually on Vercel using the Build Output API, but it may not be easy to implement. It would probably need to be added to the Vercel adapter for each framework, which would make it very easy for us developers. We may see this over time.
Since Next.js, a Vercel creation, supports ISR as a framework, other hosting providers have slowly released their take on ISR. React and Next.js are extremely popular outside of Vercel. Nevertheless, none of them support on-demand ISR.
Netlify supports ISR but doesn’t seem interested in supporting on-demand revalidation, even though it is the provider you would most expect to support it. Remember that ISR is not just caching, but the ability to cache and revalidate the cache on the CDN once it expires.
Someone from Netlify wrote a whole article explaining why on-demand revalidation is not great. Isn’t that up to the developer to decide? The whole argument stems from the problems of showing an older version of a page. Ultimately, this could be fixed given the right conditions.
I think the article was probably written when Netlify decided they didn’t want to give any more resources to supporting it. Either way, on-demand builders, which didn’t exist before that point, are now a vital part of Netlify.
Firebase supports ISR as well, using Firebase Cloud Functions and Fastly CDN under the hood. However, it does not support on-demand revalidation and seems to only work for Next.js, not SvelteKit.
While Cloudflare is no different and has no framework support for ISR, someone created an article about using Incremental Static Regeneration on Cloudflare Workers. Cloudflare Workers are just edge functions that don’t need to cold start like other types of serverless environments.
Cloudflare has a KV Store, which is globally available like a Redis cache would be. Basically you could use some sort of KV Store or Redis to cache your data and revalidate the cache on-demand.
This could be comparably fast to ISR depending on your region, server and database used. Upstash wrote an article comparing Redis and Cloudflare Workers KV.
If you don’t want to be locked in to a vendor, this is your best bet. These approaches are all eventually consistent, just like ISR on Vercel.
Incremental Static Regeneration is an incredible technology, but it is the on-demand part that really shines. Unfortunately this is only available on Vercel, and only works out of the box with Next.js and SvelteKit.
However, using an edge database could be just as useful. If I’m using a database like Firebase, where I get charged per read, this could save me thousands while being much faster. If I’m using a database that doesn’t scale well, this could keep my server in check. If I want to save money on function invocations, Vercel caching techniques could help there as well. All in all, it saves you money and makes your site faster.
You should be using it.
You’re going to find a multitude of inspirational design resources all around the web. But you shouldn’t have to use Google or chase things down across dozens of websites whenever you’re feeling unmotivated or uninspired.
It’s not that these aren’t good resources. It’s just that this is an inefficient way to seek out inspiration, and you could be spending your time on better things.
A better solution is to create a single curated feed of inspiring content. Instagram is a particularly effective tool for this.
You can follow all those websites or blogs you’d otherwise have to scour through one at a time. Don’t stop there though. There are some really awesome Instagram accounts that regularly publish inspiring content for web designers.
Not only does Instagram make it easy to store all your top inspiring sources in one place, but it’ll keep you feeling inspired even when you’re not searching for something specific.
Here are some accounts to get your curated feed started:
This is the Instagram account of Alex A., a UI/UX designer. There are generally two kinds of posts you’ll find here.
One type consists of original 3D illustrations depicting technology in action. Some are static while others are animated. In a recent post, we see different ways to present user controls like a toggle switch and button in a 3D space.
The other type of post is like the example you see above. These posts are part mockup, part prototype. While the user doesn’t have the ability to interact with the prototypes on Instagram, Alex includes markers in the animation that demonstrate how interaction triggers the application’s responses. So even if you can’t interact with the design, it’s still clear how you get from one screen or state to another.
If you’re looking for sleek, modern app design inspiration, give this Instagram account a look.
Austin Kleon is the author of some books you may have heard of, like Steal Like An Artist as well as Show Your Work!
He calls himself a writer who also happens to draw. As you may have noticed on this Instagram page, he has a distinctive style. It’s the same one employed when illustrating his books.
It’s not just his handwriting style that’s unique. His Instagram page displays his creative use of space. Take the diary entries—the ones that look like heavily redacted pages. He could’ve written those poems in a traditional linear style. Instead, he put his black marker to use in isolating those words.
His journals are also great examples of how to creatively use the space you have. In them, he changes case and size, adds underlines and occasionally draws in his own emoticons and illustrations.
If you’re interested in typography-based design, this is a cool account to follow.
The dailywebdesign Instagram account is an awesome place to go for UI and UX design inspiration.
It’s also a good one to follow if you want a little comic relief in your life. The account occasionally shares videos and memes that play on the differences between web design and development (like the crying kid walking down the catwalk in the GIF above).
In terms of what you’ll find here, there’s a great variety of content. For instance, you’ll see next-level design examples. You’ll also find informational GIFs that do things like visually demonstrate the differences in CSS timing values or break down the design of a mobile app splash screen.
What’s more, content goes up regularly. If you want daily inspiration to pop up in your Instagram feed, this page will deliver.
The Nielsen Norman Group UX Instagram page is a really useful one to follow if you’re looking to get better at designing user experiences. The page has a mix of video, text and slider content that break down foundational UX concepts and techniques.
In the example above, you see how a UI element like accordions is handled on this page. In seven slides, NNG demonstrates the benefits, use cases and correct way to design these space-saving, educational components of a web or app page.
NNG also publishes videos (typically under five minutes) that explain various aspects of UX research and testing. One of the most recent ones explained how to ask for users’ personal information when you do usability testing. It might not be the kind of thing you think about when you’re focused on crafting the test, but it’s important to factor it in as it can impact how many users you’re able to get and what sort of information they’re willing to give you.
Whether you’re new to UX or wanting to keep your skills sharp, this page is a must-follow.
Ramotion is an agency that specializes in branding, web and app development, as well as UI and UX design. The Instagram page serves as a living portfolio reel for the company. But not only that, the company turns each project into a case study.
Take the example of the EMI Health mobile app shown above. The video reel looks impressive. However, Ramotion dedicates a couple of posts on their Instagram page to telling the client’s story. The visual part of the posts demonstrate how the agency redesigned the app to fix the major usability issues they were dealing with before.
The account is full of case studies like these that describe the projects they’ve worked on, the client’s challenges and/or goals, and an explanation of how they tackled things.
If you find yourself taking on lots of redesign jobs for mobile and web apps, this account will be useful. While you don’t usually see the “before,” you do get a sense for what improved and effective designs look like. And if you’re simply wanting to get creative when it comes to branding and designing for major enterprises, you’ll find some great examples as well.
The UI Bucket Instagram page has a good mix of content.
There’s some funny stuff here, like a recent video that demonstrates what Norman doors look like in the real world. It’s not a meme shared without any context though. The post explains what Norman doors are, where the term came from and why the concept is terrible for the user experience.
For the most part, this page is all about education. Most of the posts play it straight and deliver information about important design concepts in an intuitive fashion.
The slider that breaks down how to design better inputs in the GIF above is a great example. Not only is the lesson delivered in simple and visually engaging steps, but it’s written out in the post. That way, if you ever want to save certain tips or points, you wouldn’t have to type it out from the image or video.
You don’t need to be a novice designer to find value in this page. Designers and developers of all experience levels will learn something from the content. Or, at the very least, have an ongoing reminder of all the ways to create better designs for the modern user.
There are a number of reasons why you’ll want to save the uidesignpatterns account to your feed.
For starters, there’s a great mix of design examples posted to the page. You’ll find websites, mobile apps and web apps—all for different kinds of companies, too.
Another reason why this page is useful is because of the comments. Many times with an account like this, commentary is relegated to one-word responses about how “awesome” the design is. On this page, followers have a tendency to give honest feedback. And I think there’s a lot that can be learned by seeing what other designers have to say about designs or the way they’re presented.
While everything on this page looks great on its face, the main goal of the account isn’t just to visually inspire. It’s meant to inspire better user experiences as well. So it’s good that followers are willing to chime in when they think something needs improvement.
The stated goal of the UI Gradient account is to help you learn UI/UX design. And it does it well.
What I especially like about this account is that it’s not all about the finished product. It zeroes in on the microscopic details and decisions that designers have to pay attention to.
In the example above, we see a slider post that proposes Anderson Grotesk as a free alternative to Helvetica. Fonts aren’t the only micro topics covered here. You’ll find content related to microcopy, scroll bar design, UX laws and more.
While this account is definitely great at covering all the minutiae related to UX design, it’s also a solid place to find alternative resources so you can work faster while remaining creative.
Many of these inspiring Instagram accounts are run by web designers or design studios. This one is run by Uxcel. It’s an app that teaches UX design, tests designers’ knowledge and then helps them find relevant work.
While you might find the occasional call-to-action to take one of their courses or tests, these messages aren’t very prominent. The primary focus of this account is to test designers on their knowledge while simultaneously teaching them about and reinforcing important design concepts, like HTML coding, accessibility in design and microcopy creation.
There’s also the occasional humorous meme thrown in, like the one you see at the start of the GIF above. If you can’t laugh about the struggles of work as a web designer, you’re going to have a hard time getting through each day. So it’s nice when accounts like these inject some humor into our feeds and lives.
Bottom line: If you’re looking for micro challenges and funny memes about client management, follow this page.
The list of Instagram accounts above mainly provide visual inspiration—beautiful designs, creative product solutions and innovative user experiences. However, there are other ways to get and stay inspired as a web designer.
What inspires you? Perhaps you want to follow your favorite podcaster for more work-related insights and posts. Or you want to follow your favorite artist or photographer who puts out posts that make you smile. The feed is yours to curate. Fill it with accounts and content that keep you excited about your work and feeling inspired to try something new or different.
GraphQL is a query language for making requests to APIs. With GraphQL, the client tells the server exactly what it needs and the server responds with the data that has been requested. To get a better understanding of some of the things that make GraphQL special, be sure to read the earlier article: GraphQL vs. REST—Which is Better for API Design?.
There are two sides to using GraphQL: as an author of a client or frontend web application, and as an author of a GraphQL server. In this chapter, we’re going to focus entirely on the latter—by going through a simple exercise on how we can create a GraphQL server API.
It’s important to always keep in mind that GraphQL is a specification, not a direct implementation. This means that a GraphQL API can be created in many different programming languages—Ruby, Java, Python and so on. We’ll focus on creating a GraphQL API with JavaScript, and we’ll use the Apollo Server library to help us achieve this.
We’ll be creating a GraphQL API with Node.js. We assume you have Node and the npm package manager already installed.
We’ll begin with an empty folder called graphql-api/.
graphql-api/
In the graphql-api/ folder, we’ll create a package.json file. The package.json file is where one can provide metadata about a Node application, list the packages the app depends on, and create scripts to run the app.
graphql-api/
  package.json
In the package.json file, we’ll introduce name and version fields to describe the project we intend to create. We’ll name our project “graphql-api” and specify the version as “0.1.0”.
{
"name": "graphql-api",
"version": "0.1.0"
}
We’ll now install the @apollo/server package and the graphql JavaScript library in our project.
@apollo/server will be used to create our GraphQL API and help instantiate a server.
graphql is a peer dependency needed by the Apollo Server package.
In our terminal, we’ll install these new packages as application dependencies.
graphql-api $: npm install @apollo/server graphql
We’ll create an index.js file in our project directory, which will be the location where we create our GraphQL API.
graphql-api/
  index.js
  package.json
In the index.js file, we’ll prepare some mock data that we’ll use to query from our GraphQL API. Our mock data will be an array of listings where each listing item will have the fields id, title and city.
const listings = [
{ id: "001", title: "Large ensuite condo", city: "Toronto" },
{ id: "002", title: "Beverly Hills Mansion", city: "Los Angeles" },
{ id: "003", title: "Small chic bedroom", city: "Dubai" },
];
The first thing we’ll do to create our GraphQL API is prepare the GraphQL schema. A GraphQL schema describes all the possible data that can be requested. We can think of a schema as the blueprint of a GraphQL API. We’ll create the schema with the GraphQL Schema Language which is a simple and language-agnostic syntax.
The GraphQL schema is where we specify the types of fields we’ll want to be queried from the API. We’ll construct a schema where we’ll be able to query a listings field which will return a list of listings from the mock data array we’ve prepared.
With this schema prepared, our index.js file will now look like the following:
const listings = [
{ id: "001", title: "Large ensuite condo", city: "Toronto" },
{ id: "002", title: "Beverly Hills Mansion", city: "Los Angeles" },
{ id: "003", title: "Small chic bedroom", city: "Dubai" },
];
const typeDefs = `#graphql
  type Listing {
    id: String!
    title: String!
    city: String!
  }

  type Query {
    listings: [Listing!]!
  }
`;
What’s happening here?
The schema is assigned to a constant called typeDefs.
We’ve created a Listing object type that represents the shape of a single listing object. The Listing object has three fields, all of which are strings: id, title and city.
We’ve created a Query object type in our schema which represents the root-level fields that can be queried from the client. We’ve stated listings to be a field that can be queried and, when resolved, will return a list of Listing object types.
Our schema simply represents the shape of data that can be queried. To define how the fields in the schema get processed, we’ll create our GraphQL resolvers. Resolvers in a GraphQL API are functions responsible for resolving a GraphQL operation to data.
We’ll specify a resolvers map to dictate how the listings field is to resolve. We’ll have the listings field simply return the mock listings array we’ve created.
const resolvers = {
Query: {
listings: () => listings,
},
};
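Because resolvers are plain functions, we can sanity-check the map directly before wiring it into a server. The following is a minimal sketch for illustration, not part of the tutorial’s files:

```javascript
// The mock data and resolvers map from the tutorial.
const listings = [
  { id: "001", title: "Large ensuite condo", city: "Toronto" },
  { id: "002", title: "Beverly Hills Mansion", city: "Los Angeles" },
  { id: "003", title: "Small chic bedroom", city: "Dubai" },
];

const resolvers = {
  Query: {
    listings: () => listings,
  },
};

// Invoking the resolver directly returns the mock data the server would serve.
console.log(resolvers.Query.listings().length); // 3
```

This also hints at how Apollo Server works under the hood: for each field in an incoming query, it calls the matching resolver function and collects the results.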
With our schema, resolvers and mock data defined, we can now create our Apollo Server instance. To do so, we’ll import and use the ApolloServer constructor function from the @apollo/server library.
The ApolloServer constructor takes an options object that requires us to pass values for a typeDefs field (i.e., the GraphQL schema) and a resolvers field (i.e., the map of functions that resolve to concrete data).
We’ll run the ApolloServer constructor, pass in the typeDefs and resolvers constants we’ve created, and assign the result to a constant called server.
import { ApolloServer } from "@apollo/server";

// ...

const server = new ApolloServer({ typeDefs, resolvers });
With the Apollo server instance now available to us, we can start our web server by running the startStandaloneServer() function from the @apollo/server/standalone package.
The startStandaloneServer() function is asynchronous; the promise it returns resolves with the url of the running server. We’ll log a message to the console with this url value.
Our index.js file in its entirety will now look like the following.
import { ApolloServer } from "@apollo/server";
import { startStandaloneServer } from "@apollo/server/standalone";
const listings = [
{ id: "001", title: "Large ensuite condo", city: "Toronto" },
{ id: "002", title: "Beverly Hills Mansion", city: "Los Angeles" },
{ id: "003", title: "Small chic bedroom", city: "Dubai" },
];
const typeDefs = `#graphql
  type Listing {
    id: String!
    title: String!
    city: String!
  }

  type Query {
    listings: [Listing!]!
  }
`;
const resolvers = {
Query: {
listings: () => listings,
},
};
const server = new ApolloServer({ typeDefs, resolvers });
startStandaloneServer(server).then(({ url }) => {
console.log(`Server is running at ${url}`);
});
In our terminal, we’ll run node index.js within the graphql-api/ folder to run the contents of the index.js file and start our web server.
graphql-api $: node index.js
With the server running appropriately, we should see the expected message in the console.
Our GraphQL API is now running on http://localhost:4000!
If we navigate to http://localhost:4000 in our browser, we’ll launch the Apollo Sandbox.
Apollo Sandbox is a web-based tool for running GraphQL operations.
With the sandbox, we can query for the listings field and specify that we want the id, title and city of each listing object to be returned.
query {
  listings {
    id
    title
    city
  }
}
When we run the query, we’ll get all the listing objects from our mock data with the fields we’ve specified.
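Per the GraphQL specification, the server wraps results in a top-level data object, so the response body for the query above should look roughly like this (shown here as a JavaScript object for illustration):

```javascript
// Illustrative: the shape of the JSON a GraphQL server returns for the query above.
const response = {
  data: {
    listings: [
      { id: "001", title: "Large ensuite condo", city: "Toronto" },
      { id: "002", title: "Beverly Hills Mansion", city: "Los Angeles" },
      { id: "003", title: "Small chic bedroom", city: "Dubai" },
    ],
  },
};

console.log(response.data.listings.length); // 3
```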
What if we only wanted to retrieve the id of each listing? We’ll have our query reflect this by only querying the id field within the listings parent field.
query {
  listings {
    id
  }
}
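To see why this works, field selection can be sketched in plain JavaScript. This toy helper only illustrates the behavior (returning just the requested fields); it is not Apollo’s actual implementation:

```javascript
const listings = [
  { id: "001", title: "Large ensuite condo", city: "Toronto" },
  { id: "002", title: "Beverly Hills Mansion", city: "Los Angeles" },
  { id: "003", title: "Small chic bedroom", city: "Dubai" },
];

// pick() mimics GraphQL field selection: keep only the requested fields per object.
const pick = (items, fields) =>
  items.map((item) =>
    Object.fromEntries(fields.map((field) => [field, item[field]]))
  );

// Querying { listings { id } } yields objects with only the id field.
console.log(pick(listings, ["id"]));
// [ { id: '001' }, { id: '002' }, { id: '003' } ]
```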
And that’s it! We’ve created a GraphQL API in a matter of minutes with which we’re able to have the client dictate what data it wants from the server.
Test the above in this Codesandbox link.
In this tutorial, we delved into the fundamentals of creating a GraphQL API using Apollo Server. With Apollo Server, we were able to define our schema, implement resolvers and finally run our GraphQL server—enabling us to query our mock listings through the Apollo Sandbox UI.
Stay tuned for our follow-up article that takes a look into how we can query a GraphQL API in a React application!
Today, when users need to work with PDF files, they often have to download applications or browser plugins. Controlling how users interact with the PDF is not an easy task.
If we offer the PDF as content, users can download it and interact using controls provided by the browser or the PDF itself. However, some businesses may want to control how users engage with the PDF, to provide a better experience or limit downloading under certain conditions.
Building a solution like this requires significant effort on both the backend and frontend. However, what if I told you that you could manage PDF interactions with just a few lines of code in the Angular PDF Viewer?
I could explain every feature of Progress Kendo UI for Angular PDFViewer, but I think the best way to learn about it and see it in action is with a real-world scenario.
We are developing an app for a university. The administration wants to provide students with the following features:
These features must be demo-ready today. Sound like too much work? There’s one more feature: We want to display a banner if a reader exceeds three pages.
No worries! We have Kendo UI for Angular PDFViewer to handle these challenges, leaving us a few hours to enjoy the NBA game.
To meet the university’s needs, we will utilize the Kendo UI for Angular PDFViewer. This robust component offers a plethora of features that, when integrated with Angular, provide a comprehensive solution.
First, set up your Angular application with the command ng new elearning-platform.
ng new elearning-platform
cd elearning-platform
npm install
Kendo UI offers a schematics command to register its Angular PDF Viewer.
ng add @progress/kendo-angular-pdfviewer
i Using package manager: npm
√ Found compatible package version: @progress/kendo-angular-pdfviewer@14.0.0.
√ Package information loaded.
The package @progress/kendo-angular-pdfviewer@14.0.0 will be installed and executed.
Would you like to proceed? Yes
√ Packages successfully installed.
UPDATE src/app/app.module.ts (515 bytes)
UPDATE package.json (1708 bytes)
UPDATE angular.json (3165 bytes)
√ Packages installed successfully.
UPDATE src/main.ts (259 bytes)
UPDATE tsconfig.app.json (294 bytes)
UPDATE tsconfig.spec.json (300 bytes)
With setup complete, let’s start defining the layout and interface for both the users and the PDF Viewer.
Start by removing the default HTML from app.component.html. Add in the following HTML elements:
<h1>Welcome to E-learning Platform</h1>
<h2>You can read online and save the state, also download the book (if you agree with the terms)</h2>
<select>
<option value="angular.pdf">Angular</option>
<option value="signals.pdf">Signals</option>
</select>
<label for="acceptTerms">
Do you agree with the terms of download?
</label>
<input id="acceptTerms" type="checkbox" />
To add a kendo-pdfviewer and a “paywall” banner, import the PDFViewerModule in the component’s imports section.
import { Component } from '@angular/core';
import { CommonModule } from '@angular/common';
import { RouterOutlet } from '@angular/router';
import {PDFViewerModule} from "@progress/kendo-angular-pdfviewer";
@Component({
selector: 'app-root',
standalone: true,
imports: [CommonModule, RouterOutlet, PDFViewerModule],
templateUrl: './app.component.html',
styleUrl: './app.component.css'
})
export class AppComponent {
title = 'elearning-platform';
}
Next, add the kendo-pdfviewer and the pay-wall elements; these elements should only appear when the user selects an option from the dropdown list. To simplify, wrap them in an ng-container.
<ng-container>
  <kendo-pdfviewer>
  </kendo-pdfviewer>
  <div class="pay-wall">
    <h1>You reached the reading limit</h1>
    <button>Close</button>
  </div>
</ng-container>
Once saved, your layout should look like:
We now have a layout without any interaction. Before we continue, add two PDF files—named exactly as they appear in the dropdown (angular.pdf and signals.pdf)—to the assets directory.
One of the main features is the ability to remember where users left off when they return to the platform. This means that when users open a PDF, they should be taken to the exact page where they left off during their last session.
The easiest way to achieve this is by using local storage in the browser. However, to reduce the amount of code in app.component
, we will create a service to encapsulate the logic for saving and storing the page number.
To generate this service, use the Angular CLI command ng g s services/reader.
ng g s services/reader
CREATE src/app/services/reader.service.spec.ts (357 bytes)
CREATE src/app/services/reader.service.ts (135 bytes)
Open the reader.service.ts file and perform the following actions:
Declare the assetURL, currentPage and storageKey variables.
Create two methods, savePage and getPage. We will connect these methods to the kendo-pdfviewer events to save and load the page.

public assetURL = 'http://localhost:4200/assets/';
private currentPage: number = 1;
private storageKey: string = 'book-page';
savePage(page: number) {
localStorage.setItem(this.storageKey, page.toString());
}
getPage() {
const savedPage = localStorage.getItem(this.storageKey) || this.currentPage;
return +savedPage;
}
Perfect, we have the first version of our reader.service
. Let’s connect it with the HTML markup and the Kendo UI PDF Viewer.
Read more about localStorage.
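The persistence logic itself is tiny and framework-free. Here is a sketch of the same save/load behavior using an in-memory Map as a stand-in for the browser’s localStorage (names mirror the service above; the Map is purely illustrative):

```javascript
// In-memory stand-in for localStorage (which stores values as strings).
const storage = new Map();

const storageKey = "book-page";
const defaultPage = 1;

function savePage(page) {
  // localStorage only holds strings, so the page number is stringified.
  storage.set(storageKey, page.toString());
}

function getPage() {
  const savedPage = storage.get(storageKey) || defaultPage;
  return +savedPage; // unary plus converts the stored string back to a number
}

savePage(7);
console.log(getPage()); // 7
```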
Now that we have our ReaderService ready, the next step is to enable the first interaction and display the PDF. To do this, we’ll need to work within the app.component.ts file and inject the ReaderService.
Here’s what we’ll cover:
Inject the ReaderService into the component.
Declare the pdfAssetUrl and bookName variables.
Create a selectBook method to update pdfAssetUrl based on the selected book from the dropdown list.
Bind pdfAssetUrl and bookName to the PDF Viewer.
First, import the ReaderService and inject it into the component using Angular’s dependency injection.
import { Component, inject } from '@angular/core';
import { ReaderService } from './services/reader.service';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.scss'],
})
export class AppComponent {
title = 'elearning-platform';
readerService = inject(ReaderService);
....
Next, let’s declare the necessary variables and implement the selectBook method. In this method, we’ll update the pdfAssetUrl by combining readerService.assetURL and bookName.
Here’s how:
export class AppComponent {
title = 'elearning-platform';
readerService = inject(ReaderService);
pdfAssetUrl = '';
bookName!: string;
selectBook() {
this.pdfAssetUrl = `${this.readerService.assetURL}${this.bookName}`;
}
}
How do we connect these variables with the methods and react to changes? Angular provides several ways to listen for events. To react to changes in the select element, we can use the (change) event and link it to the selectBook method.
How do we link the value of the select element to the bookName variable? Don’t worry, Angular provides ngModel, which is part of the FormsModule. It helps us react to changes through two-way data binding.
<select (change)="selectBook()" [(ngModel)]="bookName">
<option value="angular.pdf">Angular</option>
<option value="signals.pdf">Signals</option>
</select>
To use ngModel, import FormsModule in app.module or include it in the imports for standalone components.
Next, we want to respond to changes in order to load the PDF into the kendo-pdfviewer component. To achieve this, we bind the url and saveFileName properties.
The saveFileName property allows us to define the name of the file when the user clicks the download button in the toolbar.
The url property is one of several ways to bind the PDF to the component; in our case, we provide the URL where the PDF is stored.
Read more about Angular’s PDF Viewer and its data-binding capabilities in the PDFViewer Data-Binding documentation.
The final code looks like:
<kendo-pdfviewer
[saveFileName]="bookName"
[url]="pdfAssetUrl">
</kendo-pdfviewer>
Save your changes, then reload the page and interact with the dropdown menu to load different PDFs.
Learn more about two-way binding in Angular.
Yes, we’ve successfully loaded the PDF! However, there are still some features to complete, such as saving the page position and controlling the download options. Let’s get to it!
We have a few challenges to tackle:
Think it’s too much work? The Kendo UI PDF Viewer simplifies the process with event listeners and just a few lines of code.
By default, the Kendo PDFViewer displays all options in the toolbar. However, we want more control and wish to show the download button only if the user agrees to the terms and conditions.
The Kendo UI PDF Viewer allows us to define the list of options in the toolbar by providing an array of PDFViewerTool keys. We can specify this list in reader.service.ts as a new array containing the desired toolbar options.
Lastly, implement an allowDownload method that adds the “download” key to the toolbar options array. This way, when the user accepts the terms and conditions, the “download” option will become available.
The final code will look something like this:
import { Injectable } from '@angular/core';
import { PDFViewerTool } from '@progress/kendo-angular-pdfviewer';

@Injectable({ providedIn: 'root' })
export class ReaderService {
  public assetURL = 'http://localhost:4200/assets/';
  private currentPage: number = 1; // the default page
  private storageKey: string = 'book-page';
  // The reduced list of toolbar options
  public toolbar: PDFViewerTool[] = ['search', 'selection', 'print', 'pager'];

  savePage(page: number) {
    localStorage.setItem(this.storageKey, page.toString());
  }

  getPage() {
    const savedPage = localStorage.getItem(this.storageKey) || this.currentPage;
    return +savedPage;
  }

  allowDownload(acceptTerms: boolean) {
    // Mutate the array in place so components bound to it see the change.
    if (acceptTerms) {
      if (!this.toolbar.includes('download')) {
        this.toolbar.push('download');
      }
    } else {
      const index = this.toolbar.indexOf('download');
      if (index > -1) {
        this.toolbar.splice(index, 1);
      }
    }
  }
}
Next, declare a new variable named toolbarOptions and bind it to readerService.toolbar:
toolbarOptions = this.readerService.toolbar;
Then, link the tools property in the kendo-pdfviewer component to toolbarOptions:
<kendo-pdfviewer
[tools]="toolbarOptions"
[saveFileName]="bookName"
[url]="pdfAssetUrl">
</kendo-pdfviewer>
After saving your changes, you’ll notice that the toolbar now only displays the first four options, as specified by toolbarOptions.
To display the “download” option only when the user agrees to the terms, we need to take a few steps.
First, create a variable called acceptTerms. Using Angular’s ngModel and the ngModelChange event, we can then call a new method named activateDownload. This method will trigger the allowDownload method from our service to update the toolbar options.
export class AppComponent {
....
acceptTerms: boolean = false;
activateDownload() {
this.readerService.allowDownload(this.acceptTerms);
}
}
In the app.component.html file, we can use ngModel to bind the acceptTerms variable and listen for changes with the (ngModelChange) event. This event will trigger the activateDownload method when the user interacts with the checkbox.
<input [(ngModel)]="acceptTerms" id="acceptTerms" type="checkbox" (ngModelChange)="activateDownload()"/>
Save the changes. Now the “Download” option should appear in the toolbar when you check the “Agree with the Terms of Download” checkbox, and it should disappear when you uncheck it.
Perfect, we now have control over the toolbar! Next, let’s delve into the event-handling capabilities of the Kendo UI PDF Viewer.
This is the final and most exciting part, as we can create a seamless user experience. The Kendo UI PDF Viewer provides two handy events: load and pageChange. We’ll attach these events to methods in our reader.service to either load or save relevant information.
To do this, open app.component.ts and create a method called saveCurrentPage that takes a PDFViewerPageChangeEvent as an argument. This event object contains information about the current page within the PDF.
saveCurrentPage($event: PDFViewerPageChangeEvent) {
this.readerService.savePage($event.currentPage);
}
In your HTML file, associate the pageChange event with the saveCurrentPage method, and set the height of the PDF viewer to 600 pixels.
<kendo-pdfviewer
[tools]="toolbarOptions"
[saveFileName]="bookName"
[url]="pdfAssetUrl"
style="height: 600px;"
(pageChange)="saveCurrentPage($event)">
</kendo-pdfviewer>
After saving your changes, the service will store the current page under the book-page key in local storage.
To jump to a specific page when the user loads the PDF, and to access the Kendo UI for Angular PDFViewer instance, we can use Angular’s ViewChild decorator. This gives us access to the component so we can listen for the load event.
First, add a template reference to the Kendo UI PDF Viewer component, and then link the load event with a new method called loadPage().
<kendo-pdfviewer
#pdfViewer
[tools]="toolbarOptions"
[saveFileName]="bookName"
[url]="pdfAssetUrl"
style="height: 600px;"
(pageChange)="saveCurrentPage($event)"
(load)="loadPage()"
>
</kendo-pdfviewer>
In app.component.ts, declare a ViewChild property named pdfViewer (matching the template reference) to get access to the PDFViewerComponent, and a bookPage variable to store the page returned by the service’s getPage:
@ViewChild('pdfViewer') pdfViewer!: PDFViewerComponent;
bookPage = this.readerService.getPage();
Add the loadPage method, which uses the pdfViewer’s scrollToPage function to navigate to the page returned by readerService.getPage.
loadPage() {
this.bookPage = this.readerService.getPage();
this.pdfViewer.scrollToPage(this.bookPage);
}
Save the changes, and the PDF will jump to the last page you viewed. Yeah!
To put the cherry on top, we want to block the UI when the user reaches the page limit and prevent them from moving to the next page.
Open app.component.html. First, we’ll display the kendo-pdfviewer and the paywall only when a book is selected. Add an *ngIf directive to the ng-container to watch for bookName. Also, use another *ngIf directive to display the paywall based on the showMessageWall variable.
<ng-container *ngIf="bookName">
...
<div class="pay-wall" *ngIf="showMessageWall">
  <h1>You reached the reading limit</h1>
  <button (click)="showMessageWall = false">Close</button>
</div>
</ng-container>
In app.component.ts, add two new variables:
pageLimit = 2;
showMessageWall = false;
Add a new method to validate whether the currentPage exceeds the pageLimit. If it does, set showMessageWall to true and use the scrollToPage method to jump back to pageLimit.
private canReadMore(currentPage: number) {
if (currentPage > this.pageLimit) {
this.pdfViewer.scrollToPage(this.pageLimit);
this.showMessageWall = true;
} else {
this.showMessageWall = false;
}
}
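Stripped of Angular specifics, the paywall check is a small pure function. The sketch below illustrates the same decision logic (the names mirror the component; returning the target page stands in for the scrollToPage call):

```javascript
const pageLimit = 2;

// Decide which page to display and whether to show the paywall banner.
function canReadMore(currentPage) {
  if (currentPage > pageLimit) {
    // Snap back to the limit and raise the banner.
    return { page: pageLimit, showMessageWall: true };
  }
  return { page: currentPage, showMessageWall: false };
}

console.log(canReadMore(3)); // { page: 2, showMessageWall: true }
console.log(canReadMore(1)); // { page: 1, showMessageWall: false }
```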
Finally, invoke this method in our saveCurrentPage function so it runs every time the user changes the page.
saveCurrentPage($event: PDFViewerPageChangeEvent) {
const { currentPage } = $event;
this.readerService.savePage(currentPage);
this.canReadMore(currentPage);
}
Go ahead and save the changes. After that, we’ll see the entire workflow functioning as expected: the Kendo UI for Angular PDFViewer loading, the toolbar customization, and the paywall appearing when the page limit is reached.
We’ve demonstrated how Kendo UI for Angular PDFViewer can save you both time and effort. With out-of-the-box functionalities like toolbar customization and event handling, you can quickly build a feature-rich PDF viewer with minimal code. The ability to save user preferences and integrate paywall features adds an extra layer of sophistication to your application, enhancing the user experience.
By using Kendo UI for Angular PDFViewer, you’re not just adding a tool to view PDFs; you’re integrating a powerful asset that elevates the entire user interface, while also speeding up your development process.
This is just the beginning of what you can do with Kendo UI for Angular PDFViewer. Check out the official documentation with many examples and customizations, and don’t forget you can try Kendo UI for free.
In the previous article of this Blazor Basics series, we learned how to create HTML forms and capture user data using Blazor.
In this article, we will learn how to validate user input and provide visual feedback in case of an error.
You can access the code used in this example on GitHub.
The most straightforward approach to implementing basic form validation is using data annotations.
The System.ComponentModel.DataAnnotations namespace is well-known to experienced .NET developers and can also be used to validate Blazor forms.
We create a simple user registration form with three fields: a username, a password and a password confirmation. The two password fields should be input fields of type password and not show the user input on the screen.
Let’s start with the data model class.
public class User
{
[Required]
[StringLength(16, MinimumLength = 4,
ErrorMessage = "The username must be between 4 and 16 characters.")]
public string? Username { get; set; }
[Required]
[StringLength(24, MinimumLength = 10,
ErrorMessage = "The password must be between 10 and 24 characters.")]
public string? Password { get; set; }
[Required]
[StringLength(24, MinimumLength = 10,
ErrorMessage = "The password must be between 10 and 24 characters.")]
public string? PasswordConfirmation { get; set; }
}
We create a class named User and add a property for each piece of information we want to receive from the user. Notice the attributes used on each property definition.
Make sure to add @using System.ComponentModel.DataAnnotations; at the top of your Blazor component to import the required namespace.
We use the Required attribute to mark each property as mandatory. Next, we use the StringLength attribute to limit the number of characters for the Username and Password properties.
We also provide a custom error text as a named argument of the StringLength data annotation that will be displayed if the validation condition is not met.
Next, we define a UserModel property that contains an instance of the data class we defined. It will hold the user input.
public User? UserModel { get; set; }
We override the OnInitialized lifecycle method to create and assign an instance of the User class to the UserModel property.
We also create an empty Submit method that we can bind to the form.
The complete code block of the UserForm component looks like this:
@code {
public User? UserModel { get; set; }
protected override void OnInitialized()
{
UserModel = new User();
}
public void Submit()
{
}
public class User
{
[Required]
[StringLength(16, MinimumLength = 4,
ErrorMessage = "The username must be between 4 and 16 characters.")]
public string? Username { get; set; }
[Required]
[StringLength(24, MinimumLength = 10,
ErrorMessage = "The password must be between 10 and 24 characters.")]
public string? Password { get; set; }
[Required]
[StringLength(24, MinimumLength = 10,
ErrorMessage = "The password must be between 10 and 24 characters.")]
public string? PasswordConfirmation { get; set; }
}
}
Next, we create the form using the following Blazor component template code:
<EditForm Model="@UserModel" OnValidSubmit="@Submit">
<DataAnnotationsValidator />
<ValidationSummary />
<InputText @bind-Value="UserModel!.Username" />
<InputText type="password" @bind-Value="UserModel!.Password" />
<InputText type="password" @bind-Value="UserModel!.PasswordConfirmation" />
<button type="submit">Register</button>
</EditForm>
With around a dozen lines of code, we can define a form, bind it to the UserModel property, hook up the Submit method via OnValidSubmit and show three input fields.
Notice the use of @bind-Value for the InputText component, which binds the field to a property defined in the code section of the Blazor component.
We add the DataAnnotationsValidator component within the EditForm component to enable data annotations for form input validation.
The ValidationSummary component below the DataAnnotationsValidator component shows a summary of all validation errors. We will take a look at them shortly. You can place it anywhere within the EditForm component.
The whole component, including a few more divs and CSS to make it look decent, looks like this:
@using System.ComponentModel.DataAnnotations;

<h3>Register User</h3>

<EditForm Model="@UserModel" OnValidSubmit="@Submit">
    <DataAnnotationsValidator />
    <ValidationSummary />
    <div>
        <label style="width: 200px">
            Username:
            <InputText @bind-Value="UserModel!.Username" />
        </label>
    </div>
    <div>
        <label style="width: 200px">
            Password:
            <InputText type="password" @bind-Value="UserModel!.Password" />
        </label>
    </div>
    <div>
        <label style="width: 200px">
            Password Confirmation:
            <InputText type="password" @bind-Value="UserModel!.PasswordConfirmation" />
        </label>
    </div>
    <div style="margin-top: 10px;">
        <button type="submit">Register</button>
    </div>
</EditForm>

@code {
    public User? UserModel { get; set; }

    protected override void OnInitialized()
    {
        UserModel = new User();
    }

    public void Submit()
    {
    }

    public class User
    {
        [Required]
        [StringLength(16, MinimumLength = 4,
            ErrorMessage = "The username must be between 4 and 16 characters.")]
        public string? Username { get; set; }

        [Required]
        [StringLength(24, MinimumLength = 10,
            ErrorMessage = "The password must be between 10 and 24 characters.")]
        public string? Password { get; set; }

        [Required]
        [StringLength(24, MinimumLength = 10,
            ErrorMessage = "The password must be between 10 and 24 characters.")]
        public string? PasswordConfirmation { get; set; }
    }
}
Now, let’s build and run the application and test the form.
The form contains the three defined input fields for a username, password and password confirmation.
When we enter a username fulfilling the validation requirements, the field turns green. This is default behavior that we could change in code.
When we enter a too-short password, we get the custom error message defined in the StringLength data annotation attribute for the password field.
The ValidationSummary component doesn't offer many customization options. If you prefer custom behavior, you need to access the EditContext and implement a custom error presenter.
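As a sketch of that approach, we could create the EditContext manually, pass it to the EditForm instead of the Model parameter, and render the messages from EditContext.GetValidationMessages() in whatever markup we like (the list markup here is illustrative):

```razor
<EditForm EditContext="@editContext" OnValidSubmit="@Submit">
    <DataAnnotationsValidator />
    @* A custom error presenter: render all current messages as a plain list. *@
    <ul>
        @foreach (var message in editContext!.GetValidationMessages())
        {
            <li>@message</li>
        }
    </ul>
    @* Input fields omitted... *@
</EditForm>

@code {
    private EditContext? editContext;
    public User? UserModel { get; set; }

    protected override void OnInitialized()
    {
        UserModel = new User();
        editContext = new EditContext(UserModel);
    }
}
```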
If you want to implement custom validation rules, you can inherit from the ValidationAttribute type in the System.ComponentModel.DataAnnotations namespace and override its IsValid method.
Consider the following implementation:
using System.ComponentModel.DataAnnotations;

namespace BlazorFormValidation;

public class SupportedUsername : ValidationAttribute
{
    protected override ValidationResult? IsValid(object? value,
        ValidationContext validationContext)
    {
        var username = value as string;

        if (username == "Bill Gates")
        {
            return new ValidationResult(
                "You cannot use the reserved username 'Bill Gates'.",
                new[] { validationContext.MemberName! });
        }

        return ValidationResult.Success;
    }
}
We create a class inheriting from the ValidationAttribute class of the System.ComponentModel.DataAnnotations namespace.
Next, we override the IsValid method of the base class. We first cast the value to the string type and assign it to a variable before checking the content for "Bill Gates".
If the username equals Bill Gates, we return a ValidationResult object, including a custom error message and a reference to the validated model property.
If the username doesn't match the rule, we return ValidationResult.Success, which means that the validation rule has passed and no error will be displayed.
We can attach the SupportedUsername attribute to the Username property of the User model class:
[Required]
[SupportedUsername]
[StringLength(16, MinimumLength = 4,
    ErrorMessage = "The username must be between 4 and 16 characters.")]
public string? Username { get; set; }
When we enter Bill Gates as the username, the error will be shown.
Hint: You can also inject services registered with the dependency injection container at the startup of the application. However, be aware that you have to use validationContext.GetRequiredService<MyValidationService>(); instead of using constructor injection.
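A minimal sketch of that pattern, assuming a hypothetical MyValidationService with an IsReserved method registered in Program.cs, could look like this:

```csharp
using System.ComponentModel.DataAnnotations;
using Microsoft.Extensions.DependencyInjection;

public class SupportedUsername : ValidationAttribute
{
    protected override ValidationResult? IsValid(object? value,
        ValidationContext validationContext)
    {
        // Validation attributes are not constructor-injected, so we resolve
        // the (hypothetical) service from the ValidationContext instead.
        var service = validationContext.GetRequiredService<MyValidationService>();

        return service.IsReserved(value as string)
            ? new ValidationResult("This username is reserved.",
                new[] { validationContext.MemberName! })
            : ValidationResult.Success;
    }
}
```

This works because ValidationContext implements IServiceProvider, so the standard GetRequiredService extension method can resolve registered services from it.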
Sometimes, you want to show all error messages at the same position on the screen. That's where the ValidationSummary component makes it very simple.
However, if you prefer displaying validation errors below or adjacent to an input field, you're out of luck with the ValidationSummary component.
The ValidationMessage component, on the other hand, provides exactly what we need.
Take a look at the following example:
<EditForm Model="@UserModel" OnValidSubmit="@Submit">
    <DataAnnotationsValidator />
    @* <ValidationSummary /> *@
    <div>
        <label style="width: 200px">
            Username:
            <InputText @bind-Value="UserModel!.Username" />
            <ValidationMessage For="@(() => UserModel!.Username)" />
        </label>
    </div>
    @* Code omitted... *@
</EditForm>
We comment out or remove the ValidationSummary component and instead use the ValidationMessage component to display the error message for each specific input field.
In this case, we set the For property of the ValidationMessage component to a lambda expression returning the property of the model for which we want to display the error.
The syntax is a bit tricky, but providing an expression is required because the error message will be evaluated at runtime.
Notice that you can move the component anywhere within the EditForm component. For example, you can have two input fields and display both errors together after those two input fields.
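As a sketch based on the form above, the two password fields could share one error area rendered below both inputs:

```razor
<div>
    <InputText type="password" @bind-Value="UserModel!.Password" />
    <InputText type="password" @bind-Value="UserModel!.PasswordConfirmation" />
    @* Both error messages render together below the two inputs. *@
    <ValidationMessage For="@(() => UserModel!.Password)" />
    <ValidationMessage For="@(() => UserModel!.PasswordConfirmation)" />
</div>
```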
Compared to the ValidationSummary component, the ValidationMessage component provides much more flexibility but requires more code and a more granular definition.
We can see the different validation error output when we build and run the application again.
The validation error messages are now displayed below each input field (according to the location of the ValidationMessage in the component template).
Form input validation is a common and essential developer task when building modern data-driven web applications. Blazor offers many different options to validate a data model.
With data annotations, we have a simple but extendable approach that we used throughout this article. However, you can directly access the EditContext (wrapped by the EditForm component) and gain more control over the validation process.
The Microsoft documentation is a great starting point to learn more about advanced ways to implement custom form validation.
You can access the code used in this example on GitHub.
If you want to learn more about Blazor development, you can watch my free Blazor Crash Course on YouTube. And stay tuned to the Telerik blog for more Blazor Basics.