Larry Price

And The Endless Cup Of Coffee

How to Use getDerivedStateFromProps in React 16.3+


In a blog post in late March 2018, the React team announced that the lifecycle methods componentWillReceiveProps, componentWillMount, and componentWillUpdate will be deprecated in a future version of React. This is because of React’s eventual migration to async rendering; these lifecycle methods become unreliable once async rendering is the default.

In place of these methods, the new static method getDerivedStateFromProps was introduced. My team and I struggled at first to wrap our heads around migrating our many uses of componentWillReceiveProps to this new method. It’s generally easier than you’d think, but you need to keep in mind that the new method is static and therefore does not have access to the this context that the old lifecycle methods provided.

getDerivedStateFromProps is invoked every time a component is rendered. It takes two arguments: the next props object (which may be the same as the previous one) and the component’s previous state object. When implementing this method, we return an object containing our state changes, or null (or {}) if no changes need to be made.

componentWillReceiveProps

Here’s a pattern we were using in many components throughout our codebase:

componentWillReceiveProps(nextProps) {
  if (nextProps.selectedTab !== this.state.selectedTab) {
    this.setState(() => { return {selectedTab: nextProps.selectedTab} })
  }
}

This lifecycle method fired when we were about to receive new props in our component, passing in the new values as the first argument. We needed to check whether the new props indicated a change to the selected tab, which we stored in state. This is one of the simplest patterns to address with getDerivedStateFromProps:

static getDerivedStateFromProps(nextProps, prevState) {
  return nextProps.selectedTab === prevState.selectedTab
    ? {}
    : {selectedTab: nextProps.selectedTab}
}

This code works in exactly the same way, but, since it’s static, we no longer use the context provided by this. Instead, we return any state changes. In this case, I’ve returned an empty object ({}) to indicate no state change when the tabs are identical; otherwise, I return an object with the new selectedTab value.

Sometimes you may need to perform some operations on the new props first, but you can still compare the result to your previous state to figure out whether anything changed. In other cases you may need to duplicate some of your old props in state to make this comparison work, which may be an indication that an alternative method is a better fit.
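As a minimal sketch of that first idea (the items prop and its hidden flag are hypothetical), we can derive a value from the incoming props and only return a state change when the derived value differs from what we already stored:

static getDerivedStateFromProps(nextProps, prevState) {
  // Hypothetical example: derive the visible items from the incoming props
  const visibleItems = nextProps.items.filter((item) => !item.hidden)

  // Only return a state change when the derived value differs from what
  // was previously stored (a simple length check keeps the sketch short)
  return visibleItems.length === prevState.visibleItems.length
    ? null
    : {visibleItems}
}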

componentWillMount

We also needed to replace calls to componentWillMount. I found that these calls were usually directly replaceable with componentDidMount, which allows your component to perform an initial render before executing blocking tasks. This may also require adding a loading state to your component, but that’s better than a hanging app.

Here’s an example of a componentWillMount we had originally that blocked render until after an API call was made:

componentWillMount() {
  this.setState(() => {
    return {
      loading: 'Loading tool info'
    }
  })
  return getTool(this.props.match.params.id).then((res) => {
    this.setState(() => {
      return {
        tool: res,
        loading: null
      }
    })
  }).catch((err) => {
    api.errors.put(err)
    this.setState(() => {
      return {
        loading: null
      }
    })
  })
}

Afterwards, I changed the initial state to show the component as loading on the first render and replaced componentWillMount with componentDidMount:

state = {
  tool: null,
  loading: 'Loading tool info'
}

componentDidMount() {
  return getTool(this.props.match.params.id).then((res) => {
    this.setState(() => { return {tool: res, loading: null} })
  }).catch((err) => {
    api.errors.put(err)
    this.setState(() => { return {loading: null} })
  })
}

componentWillUpdate

Very similar to the methods discussed above, componentWillUpdate is invoked just before a component re-renders because new props or state are being received. Here’s an example of something we were doing previously:

componentWillUpdate(nextProps) {
  if (!nextProps.user.isLogged && !nextProps.user.authenticating) {
    this.context.router.history.push('/')
  }
}

And, replacing that usage with componentDidUpdate:

componentDidUpdate(/*prevProps, prevState*/) {
  if (!this.props.user.isLogged && !this.props.user.authenticating) {
    this.context.router.history.push('/')
  }
}

componentDidUpdate is similar to componentDidMount, except that it is invoked after an update caused by a change in state or props rather than only on the initial mount. As opposed to getDerivedStateFromProps, we have access to the context provided by this. Note that this method also takes prevProps and prevState arguments, which provide the previous versions of the component’s props and state for comparison against the current values.
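As a hedged variation on the example above, the prevProps argument lets us act only when something actually changed during this update instead of on every re-render:

componentDidUpdate(prevProps) {
  // Only redirect when the logged-in status changed during this update,
  // rather than re-checking the same values after every render
  if (prevProps.user.isLogged && !this.props.user.isLogged) {
    this.context.router.history.push('/')
  }
}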

Conclusion

The deprecation of these lifecycle methods won’t happen until React 17, but it’s always good to plan ahead. Many of the ways my team was using these deprecated methods could be considered an anti-pattern, and I suspect that your team may be in the same predicament.

Quick Start to Vendor Go Dependencies With Govendor


I recently spent a few days adapting my Go for Web Development video series into a text-based course. In doing so, I had the chance to investigate some of the new vendoring tools available in Go. As of Go 1.5, “vendoring” dependencies has become the norm. Vendoring means tracking your dependencies and their versions and including those dependencies as part of your project.

In particular, I explored the uses of the govendor package, mostly because it’s supported by default by Heroku. The docs on the GitHub page are a lot more thorough than what I’ll go over here.

govendor is easily installed within the go ecosystem. Assuming that $GOPATH/bin is in your path:

$ go get -u github.com/kardianos/govendor
$ which govendor
/home/lrp/go/bin/govendor

Now we just initialize the govendor directory and start installing dependencies. The govendor fetch command is pretty much all you’ll need:

$ govendor init
$ govendor fetch github.com/jinzhu/gorm
$ govendor fetch golang.org/x/crypto/bcrypt

init will create a vendor directory in your project path. Go will check this directory for any packages as though they were in your $GOPATH/src directory. The fetch calls will add new packages or update the given package in your vendor directory; in this case, I’ve fetched the latest versions of gorm and bcrypt.

This might seem painful, but the thing to do next is to commit everything in the vendor directory to your repository. Now you have it forever! This means that anyone who wants to run this version of your code in the future doesn’t have to worry about dependency versions and can instantly run your package with a valid go install.

If you don’t want to add all these packages to your repository, I don’t blame you. You can get around this by committing just your vendor/vendor.json file and then using govendor sync to install the missing packages after downloading your source code. This should be familiar to anyone who’s used bundler in ruby, virtualenv in python, or npm in Node.js. If you’re using git, you’ll want a .gitignore with the following:

vendor/*
!vendor/vendor.json

This will ignore everything in vendor/ except for the vendor.json file which lists all your packages and their corresponding versions. Now, to install any packages from vendor.json that you don’t already have in your vendor directory:

$ govendor sync

govendor is a pretty powerful tool for vendoring your Go dependencies and getting your application Heroku-ready, and I recommend checking out the docs for a more advanced overview. There are also many other vendoring options available, including an official Go vendoring tool called dep that works with Go 1.9+. dep will most likely refine the ideas that these third-party tools have pioneered, and the Go ecosystem will become more stable.

Redirecting to Your Main Site With Heroku


We have a lot of domains that we want to redirect to the same server, but we use a DNS service that does not support domain forwarding, and we’re not allowed to upgrade. I wanted to do this in the simplest way possible, so I created a workaround using a PHP script and Heroku. The source discussed in detail in this post is available on GitHub: https://github.com/larryprice/simple-heroku-redirect-app.

The goal here is for users to visit a page and then be immediately redirected to the new site. I’ve defined two environment variables to be used in this project: SITENAME, a human-readable name for our website, and SITEURL, the full URL that we actually want the user to end up on. I’ve defined a PHP file called index.php:

index.php
<!DOCTYPE html>
<html>
  <head>
    <title><?php echo getenv('SITENAME') ?> - You will be redirected shortly...</title>
    <meta http-equiv="refresh" content="0;URL='<?php echo getenv('SITEURL') ?>'" />
  </head>
  <body>
    <p>Please visit the official <?php echo getenv('SITENAME') ?> site at <a href="<?php echo getenv('SITEURL') ?>"><?php echo getenv('SITEURL') ?></a>.</p>
  </body>
</html>

The important piece here is the <meta> tag, which actually does the redirect for us. The only PHP code here is the echo getenv calls that render our environment variables into the template. Since I’m a PHP novice, there may be a better way to do this, but the echo works just fine.

We also need to tell Apache how to serve the application. We want to match any routes and render our index.php. So we create an .htaccess file:

.htaccess
RewriteEngine on
RewriteRule . index.php [L]

To satisfy Heroku, we need to list the dependencies for our PHP application. Fortunately for us, we don’t have any dependencies that Heroku does not provide by default. We’ll just create a composer.json file in the root of our project with an empty object:

composer.json
{}

That’s everything we need. You could recreate the project, but you could also just pull down the project listed above and push it up to Heroku:

$ git clone https://github.com/larryprice/simple-heroku-redirect-app.git
$ cd simple-heroku-redirect-app
$ heroku create
$ git push heroku master

With your application available on Heroku, we still need to set the environment variables described earlier as config variables:

$ heroku config:set "SITENAME=Your Good Domain's Website Name"
$ heroku config:set SITEURL=https://yourgooddomain.com

Now tell Heroku all the domains that will be accessing this application. These are the domains you want users not to use:

$ heroku domains:add yourbaddomain.com
$ heroku domains:add www.yourbaddomain.com

Now you just need to add the records indicated by the above commands to your DNS records. These will probably be CNAME records pointing from @ to yourbaddomain.com.herokudns.com and from www to www.yourbaddomain.com.herokudns.com.

Async and Await - a New Promise


In my last post, I discussed the ES2015 concept of a Promise. A Promise provides a simplified mechanism for performing asynchronous work in JavaScript without using the classic setTimeout-callback approach. Seeing as it’s been about 4 months since my previous post, a new asynchronous concept is on the rise as part of the ES2017 specification: async and await.

I became aware of async and await after reading David Walsh’s blog, at which point I disregarded the new features as being “too soon” and “not different enough” from a Promise to warrant a second thought. Then, yesterday, I used them, and my life was, once again, forever changed.

await is used to essentially wait for a Promise to finish. Instead of using a callback with a then clause, await allows you to perform the action and/or store the result as though you were inside a synchronous function.

async is a keyword used on a function to specify that the function will use await. Try to call await in a function not marked async and you’re going to have a bad time. Any async function returns a Promise.
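As a quick sketch of that last point, calling an async function always hands back a Promise, even when the function body returns a plain value:

async function greet() { return 'hello'; }

const result = greet();
console.log(result instanceof Promise); // true
result.then((value) => console.log(value)); // 'hello'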

Let’s see an example:

function getFirstName() { return Promise.resolve('Charles'); }
function getMiddleName() { return Promise.resolve('Entertainment'); }
function getLastName() { return Promise.resolve('Cheese'); }

async function getName() {
  const first = await getFirstName();
  const middle = await getMiddleName();
  const last = await getLastName();

  return `${first} ${middle} ${last}`;
}

getName().then((name) => {
  console.log(name);
});

console.log('My next guest needs no introduction:');

// Result:
//   My next guest needs no introduction:
//   Charles Entertainment Cheese

We have three functions which each return a Promise, and an async function which calls those functions sequentially and uses the results to construct a string. We call the getName function (which is async and therefore returns a Promise) and log the results. Our last command logs a special message. Due to the asynchronous nature of the getName function, our special message is logged first, and then the result of getName.

This comes in handy when you’re depending on the results of a Promise to do some work or pass into another asynchronous call. But, in the case of our getName function above, we could be getting all three of the names at once. This calls for the brilliant Promise.all method, which can also be used with async. Let’s modify our sub-name functions to all use async and then fetch them all at once:

async function getFirstName() { return 'Charles'; }
async function getMiddleName() { return 'Entertainment'; }
async function getLastName() { return 'Cheese'; }

async function getName() {
  const names = await Promise.all([getFirstName(), getMiddleName(), getLastName()]);

  return `${names[0]} ${names[1]} ${names[2]}`;
}

getName().then((name) => {
  console.log(name);
});

console.log('My next guest needs no introduction:');

// Result:
//   My next guest needs no introduction:
//   Charles Entertainment Cheese

Since an async function just returns a Promise, we can directly use (and even inter-mix) async functions inside Promise.all, and the results come back in an ordered array.

OK, what if we want to fire off some long-running task and do some other work in the meantime? We can defer our use of await until after we’ve performed all the intermediate work:

async function getFirstName() { return 'Charles'; }
async function getMiddleName() { return 'Entertainment'; }
async function getLastName() { return 'Cheese'; }

async function getName() {
  const first  = getFirstName();  // first, middle, and last will all
  const middle = getMiddleName(); // be pending Promises at this
  const last   = getLastName();   // point, to be resolved in time

  const title = Math.random() > .5 ? 'Sr.' : 'Esq.';

  return `${await first} ${await middle} ${await last}, ${title}`;
}

getName().then((name) => {
  console.log(name);
});

console.log('My next guest needs no introduction:');

// Result will be quasi-random:
//   My next guest needs no introduction:
//   Charles Entertainment Cheese, (Esq.|Sr.)

This example reiterates that you can use async functions just like you would a Promise, but with the added benefit of using await to wait for the results when necessary.
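As a small sketch of that interchangeability, the same async function can be consumed with await inside another async function or with then like any other Promise:

async function getGreeting() { return 'Hello'; }

async function run() {
  const viaAwait = await getGreeting();   // consumed with await
  getGreeting().then((viaThen) => {       // consumed as a plain Promise
    console.log(viaAwait === viaThen);    // true
  });
}

run();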

I know what you’re thinking: “All these positives, Larry! Is there nothing negative about async/await?” As always, there are a couple of pitfalls to using these functions. The biggest nuisance for me is the loss of the catch block when converting from a Promise chain. In order to catch errors with async/await, you’ll have to go back to traditional try/catch statements:

async function checkStatus() { throw 'The Cheese is displeased!'; }

async function checks() {
  try {
    await checkStatus();
    return 'No problems.';
  } catch (e) {
    return e;
  }
}

checks().then((status) => {
  console.log(status)
})
console.log('Current status:');

// Result:
//   Current status:
//   The Cheese is displeased!

The only other real downside is that async and await may not be fully supported in your users’ browsers or your version of Node.js. There are plenty of ways to get around this with Babel and polyfills, but, to be honest, I dedicated a large chunk of time yesterday afternoon to upgrading all of our libraries and babel versions to get this to work properly everywhere. Your mileage may vary, and, if you’re reading this 6 months from when it was posted, I’m sure it will be available by default in any implementations of ECMAScript.

Promise You'll Call Back: A Guide to the Javascript Promise Class


This article introduces the Javascript Promise class, and how to use a Promise to perform asynchronous work. At the end of this post, you’ll have been exposed to the most important components of the Promise API.

Introduced in the ES2015 specification, MDN dryly describes a Promise as:

The Promise object represents the eventual completion (or failure) of an asynchronous operation, and its resulting value.

But… what exactly does that entail? How does it differ from just using callbacks?

Let’s start with a simple example. If I want to perform an operation asynchronously, traditionally I would use setTimeout to do work after the main thread has finished and use a callback parameter to let the caller utilize the results. For example:

const someAsyncTask = (after) => {
  return setTimeout(() => {
    after('the task is happening');
  }, 0);
};

console.log('before calling someAsyncTask');

someAsyncTask((result) => {
  console.log(result);
});

console.log('after calling someAsyncTask');

Try running this yourself with node, and you’ll see that ‘before…’ and ‘after…’ are printed followed by ‘the task is happening’.

This is perfectly valid code, but it’s just so unnatural to handle asynchronous tasks this way. There’s no standard for which parameter should be the callback, and no standard for what arguments will be passed back to a given callback. Let’s take a look at the same situation using the new Promise class:

const someAsyncTask = () => {
  return Promise.resolve('the task is happening');
};

console.log('before calling someAsyncTask');

someAsyncTask().then((result) => {
  console.log(result);
});

console.log('after calling someAsyncTask');

Let’s walk through this. In someAsyncTask, we’re now returning a call to Promise.resolve with our result. We call then on the result of someAsyncTask and handle the result there. Promise.resolve returns an already-resolved Promise, and its then callback runs asynchronously after the main thread finishes its initial work (the final console.log, in this case).

Immediately, this feels a lot cleaner to me, but this is a really simple example.

Think about a situation where you need to perform multiple asynchronous callbacks that each depend on the results of the last callback. Here’s an example implementation using callbacks:

const getFirstName = (callback) => {
  return setTimeout(() => {
    callback('Harry');
  }, 0);
};

const getLastName = (callback) => {
  return setTimeout(() => {
    callback('Potter');
  }, 0);
};

const concatName = (first, last, callback) => {
  return setTimeout(() => {
    callback(`${first} ${last}`);
  }, 0);
}

getFirstName((first) => {
  getLastName((last) => {
    concatName(first, last, (fullname) => {
      console.log(fullname);
    });
  });
});

I think we can all agree that this is not friendly code. What makes a Promise truly special is its natural chainability. As long as we keep returning Promise objects, we can keep calling then on the results:

const getFirstName = () => {
  return Promise.resolve('Harry');
};

const getLastName = () => {
  return Promise.resolve('Potter');
};

const concatName = (first, last) => {
  return Promise.resolve(`${first} ${last}`);
}

getFirstName().then((first) => {
  return getLastName().then((last) => {
    return concatName(first, last);
  });
}).then((fullname) => {
  console.log(fullname);
});

Since concatName is dependent on the result of both getFirstName and getLastName, we still do a little bit of nesting. However, our final asynchronous action can now occur on the outside of the nesting, which will take advantage of the last returned result of our Promise resolutions.

Error handling is another can of worms in callbacks. Which return value is the error and which is the result? Every level of nesting in a callback has to either handle errors, or maybe the top-most callback has to contain a try-catch block. Here’s a particularly nasty example:

const getFirstName = (callback) => {
  return setTimeout(() => {
    if (Math.random() > 0.5) return callback('Sorry, firstName errored');
    callback(null, 'Harry');
  }, 0);
};

const getLastName = (callback) => {
  return setTimeout(() => {
    if (Math.random() > 0.5) return callback('Sorry, lastName errored');
    callback(null, 'Potter');
  }, 0);
};

const concatName = (first, last, callback) => {
  return setTimeout(() => {
    if (Math.random() > 0.5) return callback('Sorry, fullName errored');
    callback(null, `${first} ${last}`);
  }, 0);
}

getFirstName((err, first) => {
  if (err) console.error(err); // no return, will fall through despite error
  getLastName((err, last) => {
    if (err) return console.error(err);
    concatName(first, last, (err, fullname) => {
      if (err) return console.error(err);
      console.log(fullname);
    });
  });
});

Every callback has to check for an individual error, and if any level mishandles the error (note the lack of a return on error after getFirstName), you’re guaranteed to end up with undefined behavior. A Promise allows us to handle errors at any level with a catch statement:

const getFirstName = () => {
  if (Math.random() > 0.5) return Promise.reject('Sorry, firstName errored');
  return Promise.resolve('Harry');
};

const getLastName = () => {
  if (Math.random() > 0.5) return Promise.reject('Sorry, lastName errored');
  return Promise.resolve('Potter');
};

const concatName = (first, last) => {
  if (Math.random() > 0.5) return Promise.reject('Sorry, fullName errored');
  return Promise.resolve(`${first} ${last}`);
}

getFirstName().then((first) => {
  return getLastName().then((last) => {
    return concatName(first, last);
  });
}).then((fullname) => {
  console.log(fullname);
}).catch((err) => {
  console.error(err);
});

We return the result of Promise.reject to signify that we have an error. We only need to call catch once; any then handlers following a rejected Promise will be skipped. A catch could be inserted at any nesting point, which gives you the ability to continue the chain:

// ...

getFirstName().then((first) => {
  return getLastName().then((last) => {
    return concatName(first, last);
  }).catch((err) => {
    return concatName(first, 'Houdini');
  });
}).then((fullname) => {
  console.log(fullname);
}).catch((err) => {
  console.error(err);
});

So far, we’ve been returning Promise objects using Promise.resolve and Promise.reject, but we can also construct our own Promise objects, whose executor function receives its own resolve and reject callbacks. Updating the getFirstName function:

const getFirstName = () => {
  return new Promise((resolve, reject) => {
    if (Math.random() > 0.5) return reject('Sorry, firstName errored');
    return resolve('Harry');
  });
}
// ...

We can also run our asynchronous tasks without nesting by using the Promise.all method:

// ...

Promise.all([getFirstName(), getLastName()]).then((names) => {
  return concatName(names[0], names[1]);
}).then((fullname) => {
  console.log(fullname);
}).catch((err) => {
  console.error(err);
});

Give Promise.all a list of promises and it will wait for all of them to resolve, returning all the results in an array (in the order the promises were given) as a resolved Promise. If any of the promises is rejected, the entire Promise is rejected, triggering the catch statement.
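To make that rejection behavior concrete, here’s a small sketch: the moment any promise in the array rejects, the whole Promise.all rejects and the then handler is skipped.

Promise.all([
  Promise.resolve('first'),
  Promise.reject('something broke'),
  Promise.resolve('third')
]).then((results) => {
  console.log(results); // never reached
}).catch((err) => {
  console.error(err); // 'something broke'
});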

Sometimes you need to run several methods, and you only care about the first result. Promise.race is similar to Promise.all, but only waits for one of the given promises to return:

const func1 = () => {
  return new Promise((resolve, reject) => {
    setTimeout(() => resolve('func1'), 5*Math.random());
  });
}

const func2 = () => {
  return new Promise((resolve, reject) => {
    setTimeout(() => resolve('func2'), Math.random());
  });
}

Promise.race([func1(), func2()]).then((name) => {
  console.log(name);
}).catch((err) => {
  console.error(err);
});

Sometimes, ‘func1’ will be printed, but most of the time ‘func2’ will be printed.

…And that’s the basics! Hopefully, you have a better understanding of how a Promise works and the advantages provided over traditional callback architectures. More and more libraries are depending on the Promise class, and you can really clean up your logic by using those methods. As Javascript continues to evolve, hopefully we find ourselves getting more of these well-designed systems to make writing code more pleasant.