Over the history of the Internet, the single Page Speed metric has evolved into a set of metrics that each measure a different aspect of the user experience. These metrics are commonly referred to as Core Web Vitals. Together, they paint a comprehensive picture of your site's performance from an end user's perspective, and Google and other search engines factor them into search rankings.
Knowing what these metrics are, and what they are intended to measure, is an important part of creating a performance profile for your site. This knowledge can also help you find solutions for common performance issues.
In this tutorial, we'll:
- Define the metrics that make up the Core Web Vitals
- Point to additional resources where you can learn more about each metric
By the end of this tutorial, you should know which performance metrics are considered Core Web Vitals and what aspects of site performance they cover.
New Relic is a monitoring service that provides insight into your application stack, from front-end performance down to server and infrastructure metrics. It combines aggregated server logs with pre-built (or custom) monitors to track the metrics that matter most to your application. The collected data can be organized into custom dashboards, and alerts can be configured to fire when customizable conditions are met.
In this tutorial, we'll:
- Learn about different New Relic modules and their purpose
- Review some default dashboard components and reports
- Discuss how to use the information in New Relic to understand the health of your Drupal application
By the end of this tutorial, you should understand the basics of using New Relic and the insights it offers to monitor and improve the performance of your Drupal site.
Performance profiling allows you to see an overview of how your Drupal application stacks up against your users' needs and business requirements. A good profile will help you understand where the performance bottlenecks are and where you should focus your efforts in order to achieve the best results when optimizing your application.
There are many profiling tools available to help you analyze your Drupal site's performance. Some are free -- like the browser's built-in developer tools, the Lighthouse Chrome extension, and XHProf. Others are paid -- like New Relic, Blackfire, and other profiling SaaS solutions.
In this tutorial we'll:
- Outline the general concepts and goals of performance profiling
- List some available profiling tools and their features
By the end of this tutorial you should be able to describe what performance profiling is, and list the tools commonly used to establish a performance profile for a Drupal site.
When your site is experiencing performance issues, one way to pinpoint the cause is to use profiling tools. Before you can fix the issue you have to be able to identify what's causing it. All profiling tools do roughly the same thing: they tell you what code is called during the request and how much time is spent executing it. This helps to identify the slowest code and dig deeper into the cause. Once the cause is determined you can start figuring out how to optimize the code.
For this tutorial, we’ll use New Relic as a profiling tool, but you can apply a similar methodology using the profiling tool of your choice.
In this tutorial, we'll:
- Learn how to identify and analyze slow transactions
- Look at common things to check for while profiling
- Cover some questions you should ask when looking at profiling data to help track down the slow code
By the end of this tutorial, you should know how to profile a Drupal site (specifically with New Relic) to find performance bottlenecks.
Drupal site performance relies heavily on caching. Optimal caching (and invalidation) requires that each page is rendered with the correct cacheable metadata. This metadata allows for intelligent caching -- but when something isn't working correctly, it can be tricky to figure out where exactly the metadata was generated from.
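For reference, here's a minimal sketch (assuming Drupal 8 or later's Render API) of what that cacheability metadata looks like when attached to a render array. The helper function name, the cache context, and the max-age value are hypothetical examples, not code from this tutorial:

```php
<?php

use Drupal\node\NodeInterface;

/**
 * Builds a render array with explicit cacheability metadata.
 *
 * Hypothetical helper; the context and max-age values are only examples.
 */
function mymodule_player_teaser(NodeInterface $node): array {
  return [
    '#markup' => $node->label(),
    '#cache' => [
      // Invalidated whenever this node is saved.
      'tags' => $node->getCacheTags(),
      // Vary the cached copy by the current user's roles.
      'contexts' => ['user.roles'],
      // Expire after an hour even if no tag is invalidated first.
      'max-age' => 3600,
    ],
  ];
}
```

When a page misbehaves, it's usually because one of these three pieces of metadata -- tags, contexts, or max-age -- bubbled up from a render array like this one somewhere on the page.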
When debugging Drupal cache issues, you're usually trying to answer one of two primary questions:
- Why is this cached? If the information gets stale, why isn’t it updated?
- Why is this not cached? And why is our cache hit rate low?
The Drupal cache system consists of many layers, each of which may contribute to the problem. This tutorial focuses on debugging the Drupal application cache layer, along with strategies for debugging Varnish. Because most caching layers external to Drupal rely on HTTP headers, the techniques used to debug Varnish should translate to those other layers as well.
In this tutorial, we'll:
- Learn strategies for debugging the Drupal application cache and render cache
- Share strategies for debugging low hit rates when using Varnish
By the end of this tutorial, you should know how to enable and use various cache debugging mechanisms in Drupal to help identify problems in your site performance and resolve them.
Server scaling is the process of adding more resources (CPU, memory, disk space, etc.) to a server (or servers) to help with performance. This might be a single server, or a cluster of different machines. When we talk about server scaling, think more about the resources and less about the specific hardware. Modern servers may not always resemble a physical machine that you can open up and insert additional RAM into. But the theory is the same: more memory means the server can handle more concurrent requests.
Sometimes your Drupal site is optimized, but traffic is still high and takes most of the server’s resources. In order to sustain that load you'll need to scale your server up.
Sometimes you don't have the resources or expertise to implement caching optimizations, refactor code, or optimize slow queries -- all of which would improve Drupal's performance. In these cases, you may consider server scaling.
Server scaling can be done in two ways: horizontally or vertically.
In this tutorial we'll:
- Learn what server scaling is
- Discuss examples of both horizontal and vertical server scaling
- Talk about when to choose horizontal versus vertical scaling strategies
By the end of this tutorial, you should understand the concept of server scaling and how it applies to a Drupal application.
No one likes to wait for a slow site to load. Not me, not you, and definitely not search engines. And the effect of site load times on things like SEO, user bounce rates, purchase intent, and overall satisfaction is only going to become more pronounced over time.
Drupal is a modern web framework that is capable of serving millions of users. But every site is unique, and while Drupal tries hard to be fast out of the box, you'll need to develop a performance profile, caching strategy, and scaling plan that are specific to your use case in order to be truly blazing fast.
Drupal site performance depends on multiple components, from hardware setup and caching system configuration to contributed modules, front-end page weight, and CDNs. Experienced Drupal developers looking to optimize their applications know where to start looking for potential savings. They can manipulate settings and combinations of these components to achieve the desired results. Our goal with this set of tutorials is to help explain the process and provide you with the insight that comes with experience.
In this tutorial we'll:
- Introduce high-level performance concepts for Drupal that we'll then cover in more detail elsewhere
- Provide an overview of the main Drupal performance components
By the end of this tutorial, you should understand what components around your Drupal application are responsible for site performance.
Upgrade to Drupal 11
There's no one-size-fits-all path to upgrade from Drupal 10 to Drupal 11, but there is a set of common tasks that everyone will need to complete.
In this tutorial we’ll:
- Explain the differences between Drupal 10 and Drupal 11 that affect the upgrade path.
- Walk through the high-level steps required to upgrade from Drupal 10 to Drupal 11.
- Provide resources to help you create an upgrade checklist and start checking items off the list.
By the end of this tutorial you should be able to:
- Explain the major differences between Drupal 10 and 11.
- Audit your existing Drupal 10 projects for Drupal 11 readiness, and estimate the level of effort involved.
- Start the process of upgrading your site from Drupal 10 to Drupal 11.
Managing a Drupal application with Composer requires a few modifications to Composer's default behavior. For instance, Drupal expects that specialized packages called "modules" be downloaded to modules/contrib rather than Composer's default vendor directory.
Additionally, it is common practice in the Drupal community to modify contributed projects with patches from Drupal.org. How do we incorporate Drupal-specific practices like these into a Composer workflow?
In this tutorial we will:
- Address all of the Drupal-specific configuration necessary to manage a Drupal application using Composer
By the end of this tutorial you should know how to configure Composer to work with Drupal and drupal.org.
When managing your Drupal project with Composer, you'll use Composer commands to download (require) the modules and themes you want to install, as well as to keep those modules and themes up-to-date when new versions are released.
In this tutorial we'll:
- Cover step-by-step instructions for performing common Composer tasks for a Drupal application
- Install and update Drupal projects (core, modules, themes, profiles, etc.) using Composer
- Convert an existing application to use Composer
By the end of this tutorial you should know how to use Composer to install and update Drupal modules and themes.
The Migrate module itself contains some excellent example data migrations that implement the various APIs provided by the module, and it serves as the canonical documentation for how to write a migration. In this lesson we'll take a look at the beer and wine import examples provided with the Migrate module as a way to familiarize ourselves with the concepts discussed earlier, and to see the code that makes up a basic migration before attempting to write our own. In practice these examples serve as a great starting point and can often be copied and adjusted for your own needs.
Before we can write our own custom migration, we need to construct the site that we're going to import data into, and of course we need some source data to import. In this lesson we'll obtain some source data to work with and configure our Drupal site by installing a couple of contributed modules and creating the content types and fields necessary for our information architecture. Along the way we'll also take a look at the baseball player and team data that we'll be importing and familiarize ourselves a bit more with the tables and columns in our source database.
Note: The Lahman database structure has changed since this video was recorded, and the latest files provided by Lahman don't match what is used in the video. When following along with these examples you'll either need to use the 2012 source data, or adjust a few field/table names throughout the source code so that they match the current structure of the Lahman data.
Running your own migration requires a bit of setup and boilerplate code: Drupal needs to know where to find the source data, and the Migrate module needs some basic information about our custom code. In this lesson we'll look at setting up Drupal to connect to multiple databases, and then create the skeleton of a simple module that will house our custom migration code, along with an implementation of hook_migrate_api(). Finally, we'll create a base class from which we can begin writing our own custom migrations, and talk about why this is a good way to start organizing our migration.
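As a rough sketch (not the lesson's exact code), those pieces might look something like this for a hypothetical baseball module. The connection key, credentials, file names, and class names are all assumptions, and registration details vary by Migrate release:

```php
<?php

// settings.php: define a second database connection for the source data.
$databases['lahman']['default'] = array(
  'driver' => 'mysql',
  'database' => 'lahman2012',
  'username' => 'dbuser',
  'password' => 'dbpass',
  'host' => 'localhost',
);

// baseball.migrate.inc: tell the Migrate module about our code.
/**
 * Implements hook_migrate_api().
 */
function baseball_migrate_api() {
  return array(
    'api' => 2,
    // Registering migrations here works in Migrate 7.x-2.6; earlier
    // releases register migration classes differently.
    'migrations' => array(
      'Player' => array('class_name' => 'PlayerMigration'),
    ),
  );
}

// baseball.migrate.inc: a base class holding settings shared by all of our
// custom migrations.
abstract class BaseballMigration extends Migration {
  public function __construct($arguments) {
    parent::__construct($arguments);
    $this->description = t('Migrations of Lahman baseball data.');
  }
}
```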
In this lesson we're going to write our first custom data migration and start importing some of the player data into Drupal. The primary concepts covered in this lesson are creating a migration class that follows the pattern the Migrate module needs in order to discover our code, and defining source and destination objects so that the Migrate module knows where to read data from and where to write it to. Finally, we'll add a simple single-field mapping where we map the player's name to the node.title field, allowing us to run the migration for the first time and import some real data.
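Here's a minimal sketch of what such a class might look like, building on the hypothetical BaseballMigration base class above. The 'player' content type is an assumption, and the table and column names follow the 2012 Lahman data:

```php
<?php

class PlayerMigration extends BaseballMigration {

  public function __construct($arguments) {
    parent::__construct($arguments);

    // Source: rows from the Master table in the secondary 'lahman' database.
    $query = Database::getConnection('default', 'lahman')
      ->select('Master', 'm')
      ->fields('m', array('playerID', 'nameFirst', 'nameLast'));
    // The map table lives in Drupal's database, so it can't be joined to a
    // source query running on another connection.
    $this->source = new MigrateSourceSQL($query, array(), NULL, array('map_joinable' => FALSE));

    // Destination: nodes of a hypothetical 'player' content type.
    $this->destination = new MigrateDestinationNode('player');

    // Map: track which source playerID became which node.
    $this->map = new MigrateSQLMap(
      $this->machineName,
      array(
        'playerID' => array('type' => 'varchar', 'length' => 10, 'not null' => TRUE),
      ),
      MigrateDestinationNode::getKeySchema()
    );

    // A single field mapping to start with: the player's name as node title.
    $this->addFieldMapping('title', 'nameLast');
  }

}
```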
In this lesson we'll continue to add field mappings to the basic migration class created in the previous lesson. We'll see how to add more information about the available source fields. We'll map more of the player source data fields to their equivalent destination fields and learn about some of the many ways that fields can be mapped. Finally, we'll also cover mapping subfields, which allows us to import data for things like the text format of a node's body field or the alt column of an image field -- information that lives within a compound field.
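Subfield mappings use a colon-delimited syntax. A small sketch, inside the migration's constructor; the 'bio' and 'photo' source columns and the destination field names are hypothetical:

```php
// Map the biography text, and set its text format via the :format subfield.
$this->addFieldMapping('body', 'bio');
$this->addFieldMapping('body:format')->defaultValue('filtered_html');

// Map an image field's file and its alt text via the :alt subfield.
$this->addFieldMapping('field_photo', 'photo_filename');
$this->addFieldMapping('field_photo:alt', 'photo_description');
```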
In this lesson we're going to finish mapping the fields for the player migration and learn how to deal with source data that requires some additional massaging before being saved to the destination. We'll learn about the use of field mapping callback functions and the migration's prepareRow() method as possible spots to perform additional logic during a migration. Then we'll use these techniques to combine our players' first and last name fields into the node title, handle the birth and death date fields by concatenating their three source columns into a single date string, and finally add some extra information to the notes field during import that will allow us to track imported records in the future.
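A sketch of what the prepareRow() portion of this might look like; column names follow the 2012 Lahman data, and the exact logic in the video may differ:

```php
public function prepareRow($row) {
  // Let the parent class skip rows first.
  if (parent::prepareRow($row) === FALSE) {
    return FALSE;
  }

  // Combine first and last name for the node title; in the constructor this
  // is mapped with $this->addFieldMapping('title', 'full_name').
  $row->full_name = $row->nameFirst . ' ' . $row->nameLast;

  // Concatenate the three birth date columns into a single date string.
  $row->birth_date = $row->birthYear . '-' . $row->birthMonth . '-' . $row->birthDay;

  // Tag imported records so we can identify them later.
  $row->notes = t('Imported from the Lahman baseball database.');

  return TRUE;
}
```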
Note: This lesson was recorded using the 7.x-2.6 version of the Date module; however, the 7.x-2.7 version is now out, which contains some changes to the module's integration with the Migrate module. The biggest change is that the date_migrate module is no longer required and has been deprecated. You can read more about the changes here: https://drupal.org/node/2034231
Although not required when writing your own migration, the Migrate module provides ways for us to decorate our migrations with additional information, making it easier to keep track of who is working on what, what needs to get done, and related issues. We've already seen some of this in previous lessons with the Do Not Migrate option for fields and the ability to provide field mapping descriptions. In this lesson we'll take a look at how we can use these additional tools to show the name, contact info, and roles of individuals working on the migration; pose questions to other team members via the migration UI; and link individual field mappings to related tickets in our bug tracking software -- making it easier for team members who don't want to be involved with the code to help move our migration along.
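Here's a hedged sketch of the kind of decoration this lesson covers, placed inside a migration's constructor; the name, email address, issue-tracker URL, ticket number, and field names are all made up:

```php
// Show who is working on this migration in the Migrate UI.
$this->team = array(
  new MigrateTeamMember('Jane Developer', 'jane@example.com', t('Implementor')),
);

// Turn issueNumber() values on field mappings into links to our tracker.
$this->issuePattern = 'https://tracker.example.com/issues/:id:';

// Document a mapping and tie it to a ticket.
$this->addFieldMapping('field_birth_date', 'birth_date')
  ->description(t('Built from birthYear/birthMonth/birthDay in prepareRow().'))
  ->issueNumber(1234);

// Explicitly flag source columns we've decided not to migrate.
$this->addUnmigratedSources(array('birthCountry'), t('DNM'));
```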
In this lesson we're going to take a look at creating relationships between two Drupal nodes during a migration. In our case we've got player and team nodes, and each player node has an entity reference to a team node which we need to populate during our migration. In order for this to work we need to ensure that the team node has already been created so that we know the unique node.nid to use in the entity reference field for the player.
To accomplish this we're going to write a migration for the team data and ensure that it runs before our player migration. Then we'll make use of the mapping between source and destination rows that the Migrate module tracks for teams, so that during the player migration we can look up the corresponding team node's nid and use it.
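In code, that lookup is a one-line addition to the field mapping. A sketch, assuming the team migration is registered under the machine name 'Team' and the player content type has a hypothetical field_team entity reference field:

```php
// Inside the Player migration's constructor.

// Declare the hard dependency so teams are always imported first.
$this->dependencies = array('Team');

// At import time, Migrate looks up the teamID in the Team migration's map
// table and substitutes the node ID of the already-created team node.
$this->addFieldMapping('field_team', 'teamID')
  ->sourceMigration('Team');
```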
This lesson is a short one, but it covers an important topic: multi-value fields. Almost any field in Drupal can be configured to accept more than one value. Our teams entity reference field is a good example of this: a player could have played for one or more teams over the course of their career. This lesson will look at two different ways to map multiple values for a single field.
First, we'll look at doing it in a callback method, where we perform an additional query and then use the values it returns. Second, we'll look at using the field mapping's separator method to take a source column that contains multiple comma-separated values and import them as individual field values.
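A sketch of both approaches follows. The Appearances table and its columns come from the Lahman data, the destination field names are hypothetical, and the video's callback-based version may differ slightly from the prepareRow() variant shown here:

```php
// Approach 1: query for all of a player's teams and hand back an array.
public function prepareRow($row) {
  if (parent::prepareRow($row) === FALSE) {
    return FALSE;
  }
  $row->teams = Database::getConnection('default', 'lahman')
    ->select('Appearances', 'a')
    ->fields('a', array('teamID'))
    ->condition('a.playerID', $row->playerID)
    ->execute()
    ->fetchCol();
  return TRUE;
}

// In the constructor: an array of source values maps to multiple field deltas.
$this->addFieldMapping('field_teams', 'teams')
  ->sourceMigration('Team');

// Approach 2: if the source column is a comma-separated string such as
// 'NYA,BOS,DET', let the mapping split it into individual values.
$this->addFieldMapping('field_teams', 'teams')
  ->separator(',')
  ->sourceMigration('Team');
```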
The infamous causality dilemma of the chicken and the egg asks which of the two came first, while constantly running up against the fact that you need one to produce the other. It's a vicious circle. In this lesson we're going to explore this dilemma in the context of data migrations. Imagine a scenario where you've got an article node type with a reference field for similar articles, which you need to populate with the node IDs of those similar articles. During a migration, the article being referenced may or may not already exist when the referencing article is imported. If it doesn't exist yet, how do we know what ID to put into the reference field?
One option would be to solve this problem with multiple passes: a first pass that creates all of the articles, and a second that comes back through and updates the similar articles field. But what happens if the similar articles field is required? You wouldn't be able to save the article without a value in that field the first time around. You can see how this quickly becomes another example of the chicken-or-egg problem.
Lucky for us, the Migrate module has a solution to this called stub migrations: a process that creates a stub, or shell, for the referenced-but-not-yet-created article so that we can use its unique ID. Then, when that article is encountered later in the migration, Migrate updates the stub rather than creating a new article.
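As a rough sketch of how this looks in code: the migration being referenced implements createStub(), and the referencing migration keeps using an ordinary sourceMigration() lookup. The method signature has changed between Migrate releases (check the migrate_example wine.inc that ships with your version), and the field and migration names below are hypothetical:

```php
// In the Article migration (the migration being referenced): create a
// minimal placeholder node and return its ID(s). Older Migrate releases
// pass only the $migration argument to this method.
protected function createStub($migration, array $source_id = array()) {
  $node = new stdClass();
  $node->type = 'article';
  $node->title = t('Stub article');
  $node->uid = 1;
  node_save($node);
  return empty($node->nid) ? FALSE : array($node->nid);
}

// In the migration that populates the similar-articles field: when the
// lookup misses, Migrate calls createStub() on the Article migration and
// uses the stub's nid; the stub is updated when the real row is imported.
$this->addFieldMapping('field_similar_articles', 'similar_ids')
  ->separator(',')
  ->sourceMigration('Article');
```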