Search Engine Optimization with Next.js

Anton Biriukov
5 min read · Mar 20, 2021

In this article, I will cover four steps that will help improve your website's SEO.

Photo by Marten Newhall on Unsplash

When it comes to building any modern website, you will almost certainly have to work on its Search Engine Optimization, because you want other people to be able to find your website and use it. Let's take a closer look at what SEO is and how to make it work for you.

Search Engine Optimization (SEO) is the process of making your website easier for search engines to discover, understand, and rank. In order to provide better and faster search results, search engines use crawlers — automated software which:

  • searches (‘crawls’) the internet constantly for new or updated web pages
  • indexes those websites — saves their content and location

There are a number of search engines available, with the most popular being Google, Bing, and Yandex. In this article, we will mostly focus on Google, which handles over 90% of all search requests on the web. Given that share, simply making sure your website is properly indexed by Google is already a big win and should be the first thing on the to-do list.

Verify Your Website (Domain)

Google provides a dedicated console — Google Search Console — to manage and review the SEO performance of your website. It is quite a powerful tool that allows you to collect analytics and find ways to improve your SEO. The first step to start using this console is to verify ownership of your website (domain) in it.

It is fairly simple to use and provides all necessary instructions. Once verified, you will be able to access a variety of tools.
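One of the available verification methods for a URL-prefix property is an HTML meta tag that Search Console asks you to place in the <head> of your home page. In a Next.js app, a custom ./pages/_document.tsx is a natural place for it. Below is a minimal sketch, not from the article itself — the content value is a placeholder for the token Search Console generates for you:

// ./pages/_document.tsx — minimal sketch for the HTML-tag verification method
import Document, { Html, Head, Main, NextScript } from 'next/document';

class MyDocument extends Document {
  render() {
    return (
      <Html lang="en">
        <Head>
          {/* Placeholder — replace with the token Google Search Console gives you */}
          <meta name="google-site-verification" content="YOUR_VERIFICATION_TOKEN" />
        </Head>
        <body>
          <Main />
          <NextScript />
        </body>
      </Html>
    );
  }
}

export default MyDocument;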

Enable Crawlers

Firstly, it is important to make sure that search engines' crawlers are able to access your website. One of the most widely used ways to do so is with a robots.txt file. Through this file, website owners can specify which crawlers are permitted to crawl and index which pages. You can get more information about it on the official website or in this guide by Google. Ultimately, it takes the following form:

# Specify which crawlers these rules apply to (e.g. Googlebot, Slurp, Yandex)
User-agent: *
# Specify which pages the above-mentioned crawlers may crawl
Allow: /
# Specify which pages the above-mentioned crawlers should not crawl
Disallow: /search
# Specify how often crawlers should check for new/updated pages
# on your domain (in seconds)
Crawl-delay: 1

This file should sit in the root directory of your website — in Next.js, that means the ./public folder. It is important to note that although most crawlers will follow the instructions given in this file, it does not actually prevent them from crawling those pages if they want to. If you wish to keep certain pages private, you should consider password-protecting them instead.

In fact, most popular websites have a robots.txt file. For example, you can take a look at https://twitter.com/robots.txt, https://www.google.com/robots.txt or https://github.com/robots.txt.
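If you want the rules to differ between environments (for example, blocking all crawlers on a staging deployment), a static file in ./public is not enough. One possible approach — not covered in this article, so treat it purely as a sketch — is to serve robots.txt from an API route via a rewrite; the environment variable name below is a hypothetical placeholder:

// ./pages/api/robots.ts — serve robots.txt dynamically (sketch)
import type { NextApiRequest, NextApiResponse } from 'next';

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  // NEXT_PUBLIC_SITE_ENV is an assumed variable — use whatever your deployment exposes
  const isProduction = process.env.NEXT_PUBLIC_SITE_ENV === 'production';

  // Allow everything in production, block everything everywhere else
  const body = isProduction
    ? 'User-agent: *\nAllow: /\nDisallow: /search\n'
    : 'User-agent: *\nDisallow: /\n';

  res.setHeader('Content-Type', 'text/plain');
  res.status(200).send(body);
}

// next.config.js — point /robots.txt at the API route above
// module.exports = {
//   async rewrites() {
//     return [{ source: '/robots.txt', destination: '/api/robots' }];
//   },
// };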

Create a Sitemap

A sitemap is a file that essentially contains a list of all of the pages on your website. Google provides a comprehensive overview in their guide. In order to generate a sitemap for our Next.js website, we need to consider what types of routes we have (static, dynamic). We also need to decide how often we want to update it, or which events should trigger an update. Once generated in .xml format, it can optionally be compressed and should be stored in the root directory of the website (the ./public folder for Next.js apps).
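For static routes, a minimal generation script might look something like the sketch below. It only walks the pages directory — dynamic routes would have to be appended from your data source — and the globby dependency, file paths, and BASE_URL are assumptions, not part of the original article:

// scripts/generate-sitemap.ts — a rough sketch for static routes only
import fs from 'fs';
import { globby } from 'globby'; // globby v12+ (ESM); older versions export a default function

const BASE_URL = 'https://your-website.com'; // replace with your production domain

async function generateSitemap(): Promise<void> {
  // Collect page files, skipping Next.js internals and API routes
  const pages = (await globby(['pages/**/*.tsx', '!pages/_*.tsx', '!pages/api/**']))
    // Dynamic routes like pages/posts/[id].tsx need to be filled in from your data source
    .filter((page) => !page.includes('['));

  const urls = pages
    .map(
      (page) =>
        page.replace(/^pages/, '').replace(/\.tsx$/, '').replace(/\/index$/, '') || '/'
    )
    .map((route) => `  <url><loc>${BASE_URL}${route}</loc></url>`)
    .join('\n');

  const sitemap = `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${urls}
</urlset>
`;

  fs.writeFileSync('public/sitemap.xml', sitemap);
}

generateSitemap().catch((err) => {
  console.error(err);
  process.exit(1);
});

You could run a script like this (e.g. with ts-node) as part of your build or CI pipeline, so the sitemap is regenerated on every deployment.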

Hyouk has a great guide on how to implement sitemap generation for a Next.js website. In case you use GitHub to host your project, he also covers how to set up GitHub Actions to trigger sitemap regeneration on each deployment to the master branch.

Essentially, you can configure CI/CD in a way that works best for you. For instance, you could also regenerate the sitemap on every new release. Hyouk also provides an easy way to ping Google and ask it to re-index your website:

$ curl "http://google.com/ping?sitemap=http://website.com/sitemap.xml"

Lastly, it is also a good practice to add a link to the sitemap file in the robots.txt file:

# Sitemap Link
Sitemap: https://your-website.com/sitemap.xml

Generate Meta Tags

Meta tags are used to specify information about the authors of the website, the site name, description, page title, keywords, and more. Some of them should be assigned on a page-by-page basis, while others should be assigned globally. In Next.js, global attributes should be specified in the ./pages/_document.tsx file. Below is an example of global attributes; see the corresponding file in the Telescope project for reference.

<meta name="description" content="Description of your website" />                                 <meta name="author" content="Author's name" />                                 <meta name="keywords" content="List, of, keywords" />                                 <meta name="application-name" content="Application name" />

The canonical link allows you to specify the canonical URL for each page on your website and should be used on a page-by-page basis. For example, you may have development, testing, and production environments deployed to https://dev.your-website.com, https://test.your-website.com, and https://your-website.com respectively. In this case, you want to tell crawlers that identical routes under these domains should be treated as duplicates, with the production one being canonical. In Next.js, this link should be placed in the <Head> of each page (via next/head), for example in ./pages/index.tsx:

<Head>
  <link rel="canonical" href="https://your-website.com" />
</Head>
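Rather than hard-coding the URL on every page, you can also build the canonical URL from the current route. The sketch below is one way to do it — the BASE_URL constant is an assumption and should point at your production domain:

// A sketch of a per-page canonical link built with next/head and useRouter
import Head from 'next/head';
import { useRouter } from 'next/router';

const BASE_URL = 'https://your-website.com'; // assumed production domain

export default function HomePage() {
  const { asPath } = useRouter();
  // Strip query strings so duplicate URLs collapse onto one canonical URL
  const canonicalUrl = `${BASE_URL}${asPath.split('?')[0]}`;

  return (
    <>
      <Head>
        <link rel="canonical" href={canonicalUrl} />
      </Head>
      {/* page content */}
    </>
  );
}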

Social meta tags provide you with a great way to enrich links to your website posted on social media or forwarded in private messages. There are a number of markup systems in use, with the most common ones being Facebook's Open Graph and Twitter Cards. Essentially, these protocols allow you to specify information such as the web page title, description, image, etc., which is used to enrich links to your website. See this file from the Telescope project for reference. In short, you can add the following to the <Head> in your ./pages/index.tsx:

<meta property="og:url" content={currentUrl} />                             <meta property="og:title" content={pageTitle} />                             <meta name="twitter:title" content={pageTitle} />

Some of them should be assigned on a page-by-page basis (like the ones above), while others should be assigned globally. For instance, on Telescope we use the following tags in ./pages/_document.tsx:

{/* Facebook's Open Graph */}
<meta property="og:type" content="website" />
<meta property="og:site_name" content={title} />
<meta property="og:description" content={description} />
<meta property="og:image" content={image} />
<meta property="og:image:alt" content={imageAlt} />
<meta property="og:locale" content="en_CA" />
{/* Twitter */}
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:description" content={description} />
<meta name="twitter:image" content={image} />
<meta name="twitter:image:alt" content={imageAlt} />

Once specified and deployed, you can verify these tags using Facebook Sharing Debugger and Twitter Card Validator:

Facebook Sharing Debugger
Twitter Card Validator

If your website also has accounts on Facebook or Twitter, you can link those by using twitter:site and fb:app_id. See https://developer.twitter.com/en/docs/twitter-for-websites/cards/overview/markup and https://developers.facebook.com/docs/sharing/webmasters/ for official details.
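For example (the handle and app ID below are placeholders):

<meta name="twitter:site" content="@YourTwitterHandle" />
<meta property="fb:app_id" content="YOUR_FACEBOOK_APP_ID" />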
