Larry Price

And The Endless Cup Of Coffee

Notes From All Things Open 2018


This is a brief overview of the talks I sat through while attending All Things Open 2018 in Raleigh, NC.

Day 1

The first day was okay, but I had trouble finding sessions that interested me and weren't aimed at introductory audiences.

Keynotes

The first keynote, “The Next Billion Internet Users” by Angela Oduor Lungati, described the rapid rise in internet users in Africa and Asia. Her team made their app mobile-first, as many users only have access to the internet on a smart device. This allowed the app to be used in many different and unexpected situations. Increased connectivity also allows more people to participate in the world of software. According to a recent GitHub survey, Asia is opening the largest number of repos on the site.

Burr Sutter of Red Hat talked about Istio, Red Hat’s “service mesh” system. It’s a pretty neat way to manage services with k8s and OpenShift. Users can launch multiple service containers with different features and seamlessly direct traffic to these containers based on certain rules. Users could even direct traffic to a new and old version of a container to determine how a new version interacts with a production environment, with end-users only ever interacting with the old version.

“The Next Big Wave” (Zaheda Bhorat) mostly focused on how to create a welcoming open-source project that’s easy to contribute to, especially in a rapidly more connected world. As usual, READMEs and CONTRIBUTING docs are king, as well as good tutorials, wikis, and getting started guides.

In “Design in Open Source”, Una Kravets discussed how Design Thinking can benefit open-source projects. Unfortunately, it’s really difficult to get designers to participate.

Track Sessions

“Turning ‘Wat’ into ‘Why’” (Katie McLaughlin) brought up a few idiosyncrasies from many different languages and discussed why the language behaves in that manner. No blame; just curiosity.

“Why Modern Apps Need a New Application Server” (Nick Shadrin) was an overview of the new Nginx Unit project, iterating on nginx with a focus on microservice architectures. This system actually launches applications, and several libraries/packages/modules are available for things like NodeJS and Go to enable this functionality. Configuration of any language was nearly identical, and defining the number of running instances was really easy through JSON endpoints. Auto-scaling was also included out-of-the-box.

“Open Data: Notes from the Field” was a panel discussion on how the Research Triangle uses citizens' data to make decisions. Much of the data used is decided upon on a municipal level as opposed to federal or state.

“Using Open Source for Large Scale Simulation in Robotic and Autonomous Driving Applications” (Nate Koenig) was largely a discussion about tools used to simulate robots. Obviously, testing robots in real life can be dangerous and expensive, so advanced simulation technology is crucial to iterating fast on this kind of hardware.

“React Already Did That” (Dylan Schiemann) hit on how React has evolved our ecosystem; components and functional programming will leave a permanent mark on JS development. Although React may not be around in 5 years, it is highly likely that the popular frameworks at that time will be fairly similar (think: Vue, Ionic, Svelte). This talk sort of devolved into a discussion of the speaker’s “competing” technology Dojo, which was somewhat of a precursor to React. It also uses TypeScript, which reminds me a lot of the tech stack we use at Granular.

“You XSS Your Life! How do we keep failing at security on the web?” (David Rogers) was an overview of how easy it is to fall for cross-site scripting attacks in modern web applications. Malicious user input could take down our system or reveal user data, so we should be scrubbing data anywhere it gets entered. Lots of tools available. Although this is touched upon a lot, I know that I’m guilty of just taking user input and using it unthinkingly.
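For my own notes, here's a minimal sketch of what that scrubbing can look like; the helper and element names below are made up, and a real app should lean on its framework's escaping or a vetted sanitizer library rather than a hand-rolled helper:

// Escape HTML-significant characters in untrusted input before rendering it
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Hypothetical usage: render a user-supplied comment safely
document.querySelector('#comment').innerHTML = escapeHtml(userSuppliedComment);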

Day 2

I found more relevant sessions to go to during the second day, which surprised me as normally the “last” day of a conference is worse than the first.

Keynotes

“Five Things You Didn’t Know Python Can Do!” by Nina Zakharenko went over things I already knew Python could do. Python runs important code in all industries, and has found itself indispensable in the world of science.

Babel developer Henry Zhu gave a talk titled “Maintainer’s Virtual” describing the world of full-time open-source development. Zhu left his job and works on Babel based on donations from the community. He talked about the guilt associated with taking breaks when people are donating their money to you, and how that easily leads to burnout. He talked about trying not to put too much pressure on yourself to be constantly contributing.

The final keynote, “Money as an Open Protocol” by Andreas Antonopoulos, was… interesting. A dash of conspiracy theory and anarchism made this talk a little uncomfortable. Big banks are not our friends, and this speaker was adamant that we would see the fall of centralized banking in the next 20 years. Bitcoin and friends are the predecessor to a new global digital currency. The choice we’ll be facing soon is whether we have a decentralized open currency akin to Bitcoin as our primary form of money or something more insidious such as “Facebook Coin”, “Google Coin”, “Apple Coin”, or “America Coin.” A fun quote from this talk was “The opposite of authority is autonomy.” Also “If money is the root of all evil, then sudo evil.” Although this talk was captivating, it felt like a pitch for a dystopian novel. Crowd ate it up.

Track Sessions

Kyle Simpson’s “Keep Betting on JavaScript” was probably my favorite session. Kyle gave a brief history of JavaScript, from its creation through its stagnation to the rapidly-evolving language we all know and love today. JavaScript’s failure to change in the 00s was largely due to a lack of unity in the community, ultimately leading to a spec that was thrown away. Other languages began to appear that looked like they would leave JS in the dust. Just as JS was on the brink of death, the community united, new features were specced out, and JS rose from the ashes. Many people still hate on JavaScript, and this is largely because they are “emotionally attached to the idea that JavaScript sucks.” JavaScript is the lingua franca of programming; it’s readable by developers of many languages, and ideas can easily be expressed in it.

Kyle was very much into progressive web applications, with native apps becoming an unnecessary part of the ecosystem. Every app should have at least one ServiceWorker to guarantee that a tab will continue to exist, even after we get on an airplane. “TypeScript is a really intelligent linter”, Kyle says, but beyond that it can begin to confuse the world we live in if we use too many extended features. Transforming our code with all of these tools can make debugging harder, and can make it difficult for other developers to figure out what we’ve done using “View Source.” “View Source” is the ultimate tool in a new developer’s toolkit, allowing them to see how a site works and helping them develop new ideas.

Kyle was wary of many of the new JavaScript features that are machine-centric: features that will only be used by libraries and generators and never by an everyday programmer. Kyle insists that we should focus on developers first. Even WebAssembly and similar ideas are going to make web development a more complicated landscape to enter. Kyle started early and ended late. Further reading: Alan Kay, Douglas Engelbart, Tom Dale.

“Cross-Platform Desktop Apps with Electron” (David Neal) was an introductory guide to using Electron, the cross-platform desktop UI technology behind Atom, VSCode, and the Slack desktop app. Starting in Electron seems easier than I expected. Architecture is similar to developing for the web, where we have server-side code and client-side code. It’s better to make calls to the server than to run on the UI thread. Pretty much anything that you can install with npm can readily be used in Electron, including UI frameworks such as React and testing tools such as mocha.

I watched some lightning talks during lunch. Raspberry Pi celebrates their 6th year, something something Blockchain databases, jump-starting an open-source career via blogging or speaking, examples of unconscious bias in AI datasets, all the wrong ways to pronounce “kubectl”, more on Red Hat’s Istio service mesh framework, and ideas for replacing docker with other container tools.

“Framework Free – Building a Single Page Application Without a JS Framework” (Ryan Miller) described the way we used to make websites in 2013 without frameworks, but with all the nice HTML5 features. It’s somewhat important to know how all of these things work under the covers, especially if you have to debug in the browser. It’s not always necessary to have a big, hefty framework. I was somewhat horrified by the number of people in the audience who didn’t know what jQuery was.

In “Intro to SVG”, Tanner Hodges explained the basics of SVGs, when to use them, and when to seek alternatives. Interesting cases included textured content (which rendered significantly smaller as a PNG than as an SVG), content that included text (which needed to be checked to verify that the text was rendered as native SVG elements), and photography (which, when rendered to SVG, literally embedded the original image as a high-resolution data URI, creating a massive file). He also touted the importance of newer formats such as WebP and WebM when displaying content on the web.

In “WTH is JWT”, Joel Lord broke down how JWTs are constructed from a header describing the signing algorithm, a payload (basic user information), and a signature. Although the token can be decoded and parsed to get at the information, the signature at the end is computed by hashing the header and payload with a secret key, so would-be attackers cannot simply modify the JWT to gain access to the system without knowing that secret. Of course, more security measures are necessary to keep intruders from gaining access, such as encrypted sessions and auth servers. The JWT Spec is still in the proposal period, so its definition may still change before finalization.
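To make the structure concrete for myself, here's a rough sketch of how a signed JWT is assembled using Node's built-in crypto module; the header, payload, and secret below are invented, and a real app should use a maintained JWT library:

const crypto = require('crypto');

// base64url-encode a JSON object, as JWTs require
const b64url = (obj) => Buffer.from(JSON.stringify(obj)).toString('base64')
  .replace(/=+$/, '').replace(/\+/g, '-').replace(/\//g, '_');

const header = { alg: 'HS256', typ: 'JWT' };     // signing algorithm
const payload = { sub: '1234', name: 'Larry' };  // basic user information
const secret = 'server-side-secret';             // never leaves the server

const unsigned = `${b64url(header)}.${b64url(payload)}`;

// The signature covers the header and payload, so changing either invalidates it
const signature = crypto.createHmac('sha256', secret)
  .update(unsigned).digest('base64')
  .replace(/=+$/, '').replace(/\+/g, '-').replace(/\//g, '_');

console.log(`${unsigned}.${signature}`);  // header.payload.signature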

Conclusion

Pretty good conference. I made a few interesting connections. I learned that a lot of people are in love with Vue right now, TypeScript is still popular, and microservices are the only way to build an application. My biggest complaint was that coffee was hard to come by.

How to Use getDerivedStateFromProps in React 16.3+


In a blog post from late March 2018, the React team announced that the lifecycle methods componentWillReceiveProps, componentWillMount, and componentWillUpdate will be deprecated in a future version of React. This is because of React's eventual migration to async rendering; these lifecycle methods become unreliable once async rendering is made the default.

In place of these methods, the new static method getDerivedStateFromProps was introduced. My team and I struggled at first to wrap our heads around how to migrate our many uses of componentWillReceiveProps to this new method. It’s generally easier than you think, but you need to keep in mind that the new method is static, and therefore does not have access to the this context that the old lifecycle methods provided.

getDerivedStateFromProps is invoked immediately before every render of a component. It takes two arguments: the next props object (which may be the same as the previous one) and the previous state object of the component in question. When implementing this method, we return an object describing the changes to our component state, or null (or {}) if no changes need to be made.

componentWillReceiveProps

Here’s a pattern we were using in many components throughout our codebase:

componentWillReceiveProps(nextProps) {
  if (nextProps.selectedTab !== this.state.selectedTab) {
    this.setState(() => { return {selectedTab: nextProps.selectedTab} })
  }
}

This lifecycle method fired when we were about to receive new props in our component, passing in the new value as the first argument. We needed to check whether the new props indicated a change in the state of our tab bar, which we stored in state. This is one of the simplest patterns to address with getDerivedStateFromProps:

static getDerivedStateFromProps(nextProps, prevState) {
  return nextProps.selectedTab === prevState.selectedTab
    ? {}
    : {selectedTab: nextProps.selectedTab}
}

This code works in exactly the same way, but, since it’s static, we no longer use the context provided by this. Instead, we return any state changes. In this case, I’ve returned an empty object ({}) to indicate no state change when the tabs are identical; otherwise, I return an object with the new selectedTab value.

Sometimes you may have to perform some operations on the new props first, but you can still compare the result to your previous state to figure out whether anything changed. In other cases you may need to mirror the old props in some extra state to make the comparison work, as sketched below, though that may also be an indication that an alternative method is a better fit.
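Here's a minimal sketch of that mirroring pattern; items and visibleItems are hypothetical names, not from our codebase:

static getDerivedStateFromProps(nextProps, prevState) {
  // Mirror the incoming `items` prop in state so we can detect when it changes
  if (nextProps.items !== prevState.prevItems) {
    return {
      prevItems: nextProps.items,
      visibleItems: nextProps.items.filter((item) => item.visible)
    }
  }
  return null
}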

componentWillMount

We also needed to replace calls to componentWillMount. I found that these calls were usually directly replaceable by componentDidMount, which allows your component to perform an initial render and then execute longer-running tasks. This may also require adding some loading state to your component, but that is better than a hanging app.

Here’s an example of a componentWillMount we had originally that blocked render until after an API call was made:

componentWillMount() {
  this.setState(() => {
    return {
      loading: 'Loading tool info'
    }
  })
  return getTool(this.props.match.params.id).then((res) => {
    this.setState(() => {
      return {
        tool: res,
        loading: null
      }
    })
  }).catch((err) => {
    api.errors.put(err)
    this.setState(() => {
      return {
        loading: null
      }
    })
  })
}

Afterwards, I changed the state to show the component as loading on initial render and replaced the componentWillMount with componentDidMount:

state = {
  tool: null,
  loading: 'Loading tool info'
}

componentDidMount() {
  return getTool(this.props.match.params.id).then((res) => {
    this.setState(() => { return {tool: res, loading: null} })
  }).catch((err) => {
    api.errors.put(err)
    this.setState(() => { return {loading: null} })
  })
}

componentWillUpdate

Very similar to the methods discussed above, componentWillUpdate is invoked just before rendering when a component is about to receive new props or state, meaning the render method is definitely going to be called. Here’s an example of something we were doing previously:

componentWillUpdate(nextProps) {
  if (!nextProps.user.isLogged && !nextProps.user.authenticating) {
    this.context.router.history.push('/')
  }
}

And, replacing that usage with componentDidUpdate:

componentDidUpdate(/*prevProps, prevState*/) {
  if (!this.props.user.isLogged && !this.props.user.authenticating) {
    this.context.router.history.push('/')
  }
}

componentDidUpdate is similar to componentDidMount except that it is invoked after a change in state or props occurs instead of just on the initial mount. As opposed to getDerivedStateFromProps, we have access to the context provided by this. Note that this method also takes prevProps and prevState arguments, which provide the previous versions of the component’s props and state for comparison against the current values.
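For instance, here's a sketch (reusing the getTool call from earlier) of using prevProps to refetch data only when a route parameter actually changes, which keeps us from kicking off a new request on every update:

componentDidUpdate(prevProps) {
  // Only refetch when the id in the route actually changes
  if (prevProps.match.params.id !== this.props.match.params.id) {
    this.setState(() => { return {loading: 'Loading tool info'} })
    getTool(this.props.match.params.id).then((res) => {
      this.setState(() => { return {tool: res, loading: null} })
    }).catch((err) => {
      api.errors.put(err)
      this.setState(() => { return {loading: null} })
    })
  }
}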

Conclusion

The removal of these lifecycle methods won’t happen until React 17, but it’s always good to plan ahead. Many of the ways my team was using these deprecated methods could be considered an anti-pattern, and I suspect that your team may be in the same predicament.

Quick Start to Vendor Go Dependencies With Govendor


I recently spent a few days adapting my Go for Web Development video series into a text-based course. In doing so, I had the chance to investigate some of the new vendoring tools available in Go. As of Go 1.5, “vendoring” dependencies has become the norm. Vendoring means tracking your dependencies and their versions and including those dependencies as part of your project.

In particular, I explored the uses of the govendor package, mostly because it’s supported by default on Heroku. The docs on GitHub are a lot more thorough than what I’ll go over here.

govendor is easily installed within the go ecosystem. Assuming that $GOPATH/bin is in your path:

$ go get -u github.com/kardianos/govendor
$ which govendor
/home/lrp/go/bin/govendor

Now we just initialize the govendor directory and start installing dependencies. The govendor fetch command is pretty much all you’ll need:

$ govendor init
$ govendor fetch github.com/jinzhu/gorm
$ govendor fetch golang.org/x/crypto/bcrypt

init will create a vendor directory in your project path. Go will check this directory for any packages as though they were in your $GOPATH/src directory. The fetch calls will add new packages or update the given package in your vendor directory; in this case, I’ve fetched the latest versions of gorm and bcrypt.

This might seem painful, but the thing to do next is to commit everything in the vendor directory to your repository. Now you have it forever! This means that anyone who wants to run this version of your code in the future doesn’t have to worry about dependency versions and can instantly run your package with a valid go install.

If you don’t want to add all these packages to your repository, I don’t blame you. You can get around this by committing just your vendor/vendor.json file and then using govendor sync to install the missing packages after downloading your source code. This should be familiar to anyone who’s used bundler in Ruby, virtualenv in Python, or npm in Node.js. If you’re using git, you’ll want a .gitignore with the following:

vendor/*
!vendor/vendor.json

This will ignore everything in vendor/ except for the vendor.json file which lists all your packages and their corresponding versions. Now, to install any packages from vendor.json that you don’t already have in your vendor directory:

$ govendor sync

govendor is a pretty powerful tool for vendoring your go dependencies and getting your application Heroku-ready, and I recommend checking out the docs for a more advanced overview. There are also many other vendoring options available, including an official go dependency tool called dep that works with go 1.9+. dep will most likely play a big role in refining the ideas that these third-party tools have created, and the go ecosystem should become more stable as a result.

Redirecting to Your Main Site With Heroku


We have a lot of domains that we want to redirect to the same server, but we use a DNS service that does not support domain forwarding, and we’re not allowed to upgrade. I wanted to do this in the simplest way possible, so I created a workaround using a PHP script and Heroku. The source discussed in this post is available on GitHub: https://github.com/larryprice/simple-heroku-redirect-app.

The goal here is for users to visit a page and then be immediately redirected to the new site. I’ve defined two environment variables to be used in this project: SITENAME, a human-readable name for our website, and SITEURL, the full URL that we actually want the user to end up on. I’ve defined a PHP file called index.php:

index.php
<!DOCTYPE html>
<html>
  <head>
    <title><?php echo getenv('SITENAME') ?> - You will be redirected shortly...</title>
    <meta http-equiv="refresh" content="0;URL='<?php echo getenv('SITEURL') ?>'" />
  </head>
  <body>
    <p>Please visit the official <?php echo getenv('SITENAME') ?> site at <a href="<?php echo getenv('SITEURL') ?>"><?php echo getenv('SITEURL') ?></a>.</p>
  </body>
</html>

The important piece here is the <meta> tag, which actually does the redirect for us. The only PHP code here is the echo getenv() calls that render our environment variables into the template. Since I’m a PHP novice, there may be a better way to do this, but echo works just fine.

We also need to tell Apache how to serve the application. We want to match any route and render our index.php, so we create an .htaccess file:

.htaccess
RewriteEngine on
RewriteRule . index.php [L]

To satisfy Heroku, we need to list the dependencies for our PHP application. Fortunately for us, we don’t have any dependencies that Heroku does not provide by default. We’ll just create a composer.json file in the root of our project with an empty object:

composer.json
{}

That’s everything we need. You could recreate the project, but you could also just pull down the project listed above and push it up to Heroku:

$ git clone https://github.com/larryprice/simple-heroku-redirect-app.git
$ cd simple-heroku-redirect-app
$ heroku create
$ git push heroku master

With your application available on Heroku, we still need to set the environment variables described earlier as config variables:

$ heroku config:set "SITENAME=Your Good Domain's Website Name"
$ heroku config:set SITEURL=https://yourgooddomain.com

Now tell Heroku all the domains that will be accessing this application. These are the domains you want users not to use:

$ heroku domains:add yourbaddomain.com
$ heroku domains:add www.yourbaddomain.com

Now you just need to add the records indicated by the above command to your DNS records. These will probably be CNAME records pointing from @ to yourbaddomain.com.herokudns.com and from www to www.yourbaddomain.com.herokudns.com.

Async and Await - a New Promise


In my last post, I discussed the ES2015 concept of a Promise. A Promise provides a simplified mechanism for performing asynchronous work in JavaScript without using the classic setTimeout-callback approach. Seeing as it’s been about 4 months since my previous post, a new asynchronous concept is on the rise as part of the ES2017 specification: async and await.

I became aware of async and await after reading David Walsh’s blog, at which point I disregarded the new features as being “too soon” and “not different enough” from a Promise to warrant a second thought. Then, yesterday, I used them, and my life was, once again, forever changed.

await is used to, essentially, wait for a Promise to settle. Instead of chaining a callback in a then clause, await allows you to perform the action and/or store the result as though you were inside a synchronous function.

async is a keyword used on a function to specify that the function will use await. Try to call await in a function not labeled as async and you’re going to have a bad time. Any async function returns a Promise.

Let’s see an example:

function getFirstName() { return Promise.resolve('Charles'); }
function getMiddleName() { return Promise.resolve('Entertainment'); }
function getLastName() { return Promise.resolve('Cheese'); }

async function getName() {
  const first = await getFirstName();
  const middle = await getMiddleName();
  const last = await getLastName();

  return `${first} ${middle} ${last}`;
}

getName().then((name) => {
  console.log(name);
});

console.log('My next guest needs no introduction:');

// Result:
//   My next guest needs no introduction:
//   Charles Entertainment Cheese

We have three functions which each return a Promise, and an async function which calls those functions sequentially and uses the results to construct a string. We call the getName function (which is async and therefore returns a Promise) and log the results. Our last command logs a special message. Due to the asynchronous nature of the getName function, our special message is logged first, and then the result of getName.

This comes in handy when you’re depending on the results of a Promise to do some work or pass into another asynchronous call. But, in the case of our getName function above, we could be getting all three of the names at once. This calls for the brilliant Promise.all method, which can also be used with async. Let’s modify our sub-name functions to all use async and then fetch them all at once:

async function getFirstName() { return 'Charles'; }
async function getMiddleName() { return 'Entertainment'; }
async function getLastName() { return 'Cheese'; }

async function getName() {
  const names = await Promise.all([getFirstName(), getMiddleName(), getLastName()]);

  return `${names[0]} ${names[1]} ${names[2]}`;
}

getName().then((name) => {
  console.log(name);
});

console.log('My next guest needs no introduction:');

// Result:
//   My next guest needs no introduction:
//   Charles Entertainment Cheese

Since an async function just returns a Promise, we can directly use (and even inter-mix) async functions inside Promise.all, and the results come back in an ordered array.
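As a small aside, since the results come back in order, array destructuring keeps that code tidy; this is just a variant of the getName above:

async function getName() {
  const [first, middle, last] = await Promise.all([
    getFirstName(), getMiddleName(), getLastName()
  ]);

  return `${first} ${middle} ${last}`;
}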

OK, what if we want to fire off some long-running task and do some other work in the meantime? We can defer our use of await until after we’ve performed all the intermediate work:

async function getFirstName() { return 'Charles'; }
async function getMiddleName() { return 'Entertainment'; }
async function getLastName() { return 'Cheese'; }

async function getName() {
  const first  = getFirstName();  // first, middle, and last will all
  const middle = getMiddleName(); // be pending Promises at this
  const last   = getLastName();   // point, to be resolved in time

  const title = Math.random() > .5 ? 'Sr.' : 'Esq.';

  return `${await first} ${await middle} ${await last}, ${title}`;
}

getName().then((name) => {
  console.log(name);
});

console.log('My next guest needs no introduction:');

// Result will be quasi-random:
//   My next guest needs no introduction:
//   Charles Entertainment Cheese, (Esq.|Sr.)

This example reiterates that you can use async functions just like you would a Promise, but with the added benefit of using await to wait for the results when necessary.

I know what you’re thinking: “All these positives, Larry! Is there nothing negative about async/await?” As always, there are a couple of pitfalls to using these functions. The biggest nuisance for me is the loss of the catch block when converting from a Promise chain. In order to catch errors with async/await, you’ll have to go back to traditional try/catch statements:

async function checkStatus() { throw 'The Cheese is displeased!'; }

async function checks() {
  try {
    await checkStatus();
    return 'No problems.';
  } catch (e) {
    return e;
  }
}

checks().then((status) => {
  console.log(status)
})
console.log('Current status:');

// Result:
//   Current status:
//   The Cheese is displeased!

The only other real downside is that async and await may not be fully supported in your users' browsers or your version of Node.js. There are plenty of ways to get around this with Babel and polyfills, but, to be honest, I dedicated a large chunk of time yesterday afternoon to upgrading all of our libraries and Babel versions to get this to work properly everywhere. Your mileage may vary, and, if you’re reading this 6 months from when it was posted, I’m sure it will be available by default in any implementation of ECMAScript.