How to resize images for Open Graph and Twitter using sharp
When sharing content on social media, it's essential to have properly sized, visually appealing preview images. Let's explore how to automatically resize images for Open Graph and Twitter card previews. We'll be using sharp, a powerful and fast image processing library that powers the Image component in Next.js.
The final result is going to look like this:
Initial setup
If you need help setting up the project, I recommend following this guide from the Yarn documentation.
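For reference, a minimal setup could look like this (a sketch, assuming Yarn is already installed; the directory name is arbitrary):
mkdir resize-images
cd resize-images
yarn init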
Create two directories for input and output files:
mkdir ./input
mkdir ./output
Let's also put two files, example-1.jpeg (3:4) and example-2.jpeg (wide), into the input directory:
Install sharp and tsx (we will need tsx to run the script):
yarn add sharp tsx
The resizing script
Now, let’s create our script:
touch ./resize.js
First, we want to get the list of images that the script will be resizing and prepare the names for output files:
// resize.js
import fs from "fs";
import path from "path";

async function main() {
  const srcDir = "./input";
  const destDir = "./output";

  const inputFileNames = await fs.promises.readdir(srcDir);
  console.debug("inputFileNames", inputFileNames);
  // =>
  // inputFileNames [ 'example-1.jpeg', 'example-2.jpeg' ]

  for (let i = 0; i < inputFileNames.length; i++) {
    const fileFullName = inputFileNames[i];
    const extension = path.extname(fileFullName);
    const fileName = fileFullName.replace(extension, "");

    const src = path.join(srcDir, fileFullName);
    const destOpenGraph = path.join(destDir, `${fileName}.open-graph.webp`);
    const destTwitter = path.join(destDir, `${fileName}.twitter.webp`);
    console.debug({ src, destOpenGraph, destTwitter });
  }
  // =>
  // {
  //   src: 'input/example-1.jpeg',
  //   destOpenGraph: 'output/example-1.open-graph.webp',
  //   destTwitter: 'output/example-1.twitter.webp'
  // }
  // {
  //   src: 'input/example-2.jpeg',
  //   destOpenGraph: 'output/example-2.open-graph.webp',
  //   destTwitter: 'output/example-2.twitter.webp'
  // }
}

main().catch((err) => console.error(err));
Next, let's write our image transformation function and call it twice: once for Open Graph dimensions (1200×628) and once for Twitter card dimensions (800×418). It takes the input path src, the output path dest, and the width and height as arguments. It reads the input file, resizes it, and stores the result as a WebP file:
// resize.js
import fs from "fs";
import path from "path";
import sharp from "sharp";

async function main() {
  const srcDir = "./input";
  const destDir = "./output";

  const inputFileNames = await fs.promises.readdir(srcDir);

  for (let i = 0; i < inputFileNames.length; i++) {
    const fileFullName = inputFileNames[i];
    const extension = path.extname(fileFullName);
    const fileName = fileFullName.replace(extension, "");

    const src = path.join(srcDir, fileFullName);
    const destOpenGraph = path.join(destDir, `${fileName}.open-graph.webp`);
    const destTwitter = path.join(destDir, `${fileName}.twitter.webp`);
    console.debug({ src, destOpenGraph, destTwitter });

    // Open Graph
    await transform(src, destOpenGraph, 1200, 628);

    // Twitter
    await transform(src, destTwitter, 800, 418);
  }
}

async function transform(src, dest, width, height) {
  await sharp(src)
    .resize({ width, height, fit: "cover" })
    .webp({ quality: 80 })
    .toFile(dest);
}

main().catch((err) => console.error(err));
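One small caveat: sharp's toFile will most likely fail if the output directory does not exist. We already created it with mkdir, but if you want the script to be self-sufficient, an optional addition at the top of main (reusing the same destDir variable) would be:
await fs.promises.mkdir(destDir, { recursive: true });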
Now we can run the script using tsx:
yarn tsx resize.js
This will produce the following results:
However, the images are zoomed in and cropped. This might or might not be what you want. With the current examples it looks okay, but if the picture were a portrait, the person could end up partially out of the frame.
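As a middle ground, if cropping itself is fine but you'd like sharp to keep the most "interesting" region of the picture in view, you could try its attention-based crop strategy instead of the default centre crop. A sketch (the function name is just illustrative, not part of the final script):
// Crop towards the region with the highest luminance frequency,
// colour saturation and presence of skin tones
async function transformWithSmartCrop(src, dest, width, height) {
  await sharp(src)
    .resize({ width, height, fit: "cover", position: sharp.strategy.attention })
    .webp({ quality: 80 })
    .toFile(dest);
}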
Zoomed-in and blurred background
If you're not comfortable with that, let's preserve the aspect ratio and fill the rest of the frame with a blurred, zoomed-in version of the same image. The function will do two resizes, one for the background (with blur) and one for the foreground, then composite the two layers into the final image. Adjust the transform function as follows:
// resize.js
import fs from "fs";
import path from "path";
import sharp from "sharp";

async function main() { /* ... */ }

async function transform(src, dest, width, height) {
  const metadata = await sharp(src).metadata();

  // Calculate the source and the target aspect ratio
  const srcAspectRatio = metadata.width / metadata.height;
  const destAspectRatio = width / height;

  // Resize the image so that it covers the target dimensions, apply blur and
  // store the result in memory
  const backgroundBuffer = await sharp(src)
    .resize({ width, height, fit: "cover" })
    .blur(10)
    .toBuffer();

  // Resize the image so that it's contained within the target dimensions and
  // store the result in memory
  const foregroundBuffer = await sharp(src)
    .resize(srcAspectRatio > destAspectRatio ? { width } : { height })
    .toBuffer();

  // Combine the background and the foreground and store the result in a file
  await sharp(backgroundBuffer)
    .composite([{ input: foregroundBuffer, gravity: "center" }])
    .webp({ quality: 80 })
    .toFile(dest);
}

main().catch((err) => console.error(err));
The resulting images will look like this:
Now you can use this technique to preprocess images as part of your application's build pipeline.
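For example, you could wire the script into your package.json scripts so it runs before the build. A sketch (next build is just a placeholder for whatever build command your framework uses):
{
  "scripts": {
    "resize-images": "tsx resize.js",
    "build": "yarn resize-images && next build"
  }
}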
Feedback
You can find the source code in this GitHub repository. If you have any feedback, please feel free to submit an Issue.