Google Releases Its Search News Episode on Crawling and Indexing Updates, and Link Building

Crawling and Indexing

On January 21, 2020, Google released its regular summary of what has been happening around its search systems, aimed specifically at website owners and publishers. In this latest episode, Google covered significant updates to two fundamental aspects of Google Search, crawling and indexing, as well as another relevant part of search results: links. For this and more, read on to keep abreast of what has been happening.

Crawling and Indexing: Definitions

First and foremost, crawling is when Google’s systems look at pages on the web and follow the links they find there to discover other web pages. Indexing, on the other hand, is when Google’s systems process and try to understand the content of those pages. Both processes have to work together for Google to contextualise the content on web pages.
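To make the distinction concrete, below is a minimal sketch in Python of the two steps: fetching a page and collecting the links it contains (crawling), and extracting its text into a simple in-memory store (indexing). The example.com URL and the dictionary used as an “index” are illustrative assumptions only; Google’s real systems are vastly more sophisticated.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class PageParser(HTMLParser):
    """Collects outgoing links (for crawling) and visible text (for indexing)."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        href = dict(attrs).get("href")
        if tag == "a" and href:
            self.links.append(href)

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())


def crawl_and_index(url, index):
    """Fetch one page, store its processed text in the index, and return links to follow."""
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
    parser = PageParser()
    parser.feed(html)
    index[url] = " ".join(parser.text)                    # indexing: store the understood content
    return [urljoin(url, link) for link in parser.links]  # crawling: new pages to visit next


index = {}
discovered = crawl_and_index("https://example.com/", index)  # placeholder URL
print(f"{len(discovered)} links discovered, {len(index)} page(s) indexed")
```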

Crawling Updates

While Google has been crawling the web for decades, it is always working on making crawling easier, faster, and easier for site owners to understand. In Search Console, Google recently launched an updated Crawl Stats report. Google Search Console is a free tool that website owners and publishers can use to see how Google Search views and interacts with their websites.

This report gives site owners information on how Googlebot crawls their websites. It covers the number of requests broken down by aspects such as response code and crawl purpose, host-level information on availability, examples, and more. Some of this information is also available in a site’s server access logs, but making sense of it there is often hard. Google hopes this report will make it easier for sites of all sizes to get actionable insight into Googlebot’s behaviour.
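As a rough illustration of the kind of information the report surfaces, the sketch below tallies Googlebot requests by HTTP response code straight from a server access log. The log path (access.log) and the combined log format are assumptions for the example; the actual Crawl Stats report does far more than this.

```python
import re
from collections import Counter

# Assumes a combined-format access log line such as:
# 66.249.66.1 - - [21/Jan/2021:10:00:00 +0000] "GET /page HTTP/1.1" 200 5120 "-" "Googlebot/2.1 (+http://www.google.com/bot.html)"
LOG_LINE = re.compile(r'"[A-Z]+ \S+ \S+" (?P<status>\d{3}) .*"(?P<agent>[^"]*)"\s*$')

status_counts = Counter()
with open("access.log") as log:  # hypothetical log path
    for line in log:
        match = LOG_LINE.search(line)
        if match and "Googlebot" in match.group("agent"):
            status_counts[match.group("status")] += 1

for status, count in status_counts.most_common():
    print(f"{status}: {count} Googlebot requests")
```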

Together with this tool, Google also launched a new guide on managing crawling for large websites. As a site grows, crawling usually becomes harder, so Google compiled the best practices to keep in mind. Although the guide is aimed at those who run large sites, much of the advice is useful for sites of all sizes.

Furthermore, Google has started crawling with HTTP/2, the updated version of the protocol used to access web pages. It brings improvements that are particularly relevant for browsers, and Google is now using it to make its regular crawling more efficient, too. Google has notified the websites it is crawling over HTTP/2 and plans to add more sites over time if things go well. Google also promised to come back in the near future with more news on something as foundational as crawling.
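Site owners curious whether their own server already speaks HTTP/2 can check it with a few lines of code. The sketch below is one way to do so in Python, assuming the third-party httpx library is installed with its HTTP/2 extra; note that whether Googlebot actually crawls a given site over HTTP/2 is decided on Google’s side, not by this check.

```python
# Requires the third-party httpx package with its HTTP/2 extra: pip install "httpx[http2]"
import httpx


def negotiated_protocol(url: str) -> str:
    """Request a URL while offering HTTP/2 and report which protocol the server chose."""
    with httpx.Client(http2=True) as client:
        response = client.get(url)
        return response.http_version  # e.g. "HTTP/2" or "HTTP/1.1"


print(negotiated_protocol("https://example.com/"))  # placeholder URL
```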

Indexing Updates

As already stated, indexing is the process of understanding and storing web pages’ content so that Google can show them appropriately in the search results. There are two major indexing updates in this year’s January episode of Google Search News. They include:

The Indexing Request Feature Is Back

The request indexing feature of the URL Inspection tool is back in Google Search Console. That means that, once again, website owners and publishers can manually submit individual pages and request indexing when they run into a situation where that is useful. For the most part, though, Google stated that sites should not need this and should instead focus on providing good internal linking and well-functioning pages. If a site does that well, Google’s systems can quickly and automatically crawl and index content from the website.
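As a hedged illustration of what “good internal linking” can look like in practice, the sketch below starts at a site’s homepage and measures how many clicks away each internal page is; pages that sit many clicks deep, or that never show up at all, are harder for any crawler to discover. The start URL and page limit are placeholders.

```python
import re
from collections import deque
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

START_URL = "https://example.com/"   # placeholder homepage
MAX_PAGES = 50                       # keep the sketch small


def internal_link_depths(start_url, max_pages=MAX_PAGES):
    """Breadth-first walk of same-host links; returns {url: clicks from the homepage}."""
    host = urlparse(start_url).netloc
    depths = {start_url: 0}
    queue = deque([start_url])
    while queue and len(depths) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
        except OSError:
            continue                                      # skip pages that fail to load
        for href in re.findall(r'href="([^"]+)"', html):  # naive link extraction for the sketch
            target = urljoin(url, href).split("#")[0]
            if urlparse(target).netloc == host and target not in depths:
                depths[target] = depths[url] + 1
                queue.append(target)
    return depths


for page, depth in sorted(internal_link_depths(START_URL).items(), key=lambda item: item[1]):
    print(depth, page)
```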

The Index Coverage Report Has Been Updated in Google Search Console

The Index Coverage report in Search Console has also been updated significantly. With this change, Google hopes to keep site owners better informed about issues that affect the indexing of their site content. For instance, Google has removed the overly generic “crawl anomaly” issue type and replaced it with more specific error types.

Links

First and foremost, Google uses links to find new pages and to better understand how they fit into the context of the web. Of course, aside from links, Google uses many different factors in search results, but links are an integral part of the web, so it is reasonable for sites to think about them.

Google’s guidelines mention various things to avoid with regard to links, such as buying them. Google also indicated that it often gets questions about what sites can do to attract links, and stated that although creating great content isn’t always easy, it can help site owners reach broader audiences and perhaps earn a link or two.

The Bottom Line

Optimising a website without knowing how search engines function is akin to publishing a great book without learning how to write in the first place. Google has to crawl and index web pages before it can show them appropriately in the search results. Time and again, Google releases updates on key features and tools that make it easier to understand how web pages are crawled and indexed. These tools help site owners and publishers alike to understand what is required for their sites’ content to show up in search results.