All code from this post can be found in a codepen collection.
A functional component (also sometimes referred to as a “stateless” component) is a method of defining React components with only a render method. These components still take in read-only props and return some JSX, but until now have had no means to perform any stateful logic. The following is a simple functional component that creates a button for canceling a user’s account:
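Roughly, that component looks like this (the onClick prop comes from the post; the component name and label are assumed):

```
const CancelAccountButton = ({ onClick }) => (
  <button onClick={onClick}>
    Cancel Account
  </button>
);

export default CancelAccountButton;
```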
The component takes in a single prop onClick
that is called when the button is clicked.
Let’s say that we want to add some stateful logic to that component. Marketing has started complaining that users clicking our current “Cancel Account” button contribute to a loss of revenue, and we need to slow that loss down to appease investors this quarter. We get design involved and decide to prompt the user several times to confirm their cancellation. We’ll need to keep track of the number of clicks and the current prompt in state.
Here’s how we might do that on February 5th, 2019, using class components:
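A sketch of that class component, following the description below (the state variable names come from the post; the prompt messages and handler details are assumed):

```
import React from 'react';

const messages = [
  'Cancel Account',
  'Are you sure?',
  'You will lose all your data!',
];

class CancelAccountButton extends React.Component {
  constructor(props) {
    super(props);
    this.state = { clicks: 0, buttonText: messages[0] };
  }

  componentDidUpdate(prevProps, prevState) {
    // Called on every re-render, so bail out unless the click count changed
    if (prevState.clicks !== this.state.clicks) {
      if (this.state.clicks < messages.length) {
        // More messages than clicks: show the next prompt
        this.setState({ buttonText: messages[this.state.clicks] });
      } else {
        // Out of prompts -- churn the user
        this.props.onClick();
      }
    }
  }

  render() {
    return (
      <button onClick={() => this.setState({ clicks: this.state.clicks + 1 })}>
        {this.state.buttonText}
      </button>
    );
  }
}
```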
Wow! We’ve nearly tripled the size of our original component here. In this stateful world, we needed to extend the React.Component
class, define a constructor to set our initial state, update our state when the button is clicked, and add the componentDidUpdate
lifecycle method. The componentDidUpdate
method is called on every re-render, so we first check to see if the number of clicks
changed before taking any action. If it did, we check to see if we have more messages than clicks and update the prompt text; otherwise, we call the original onClick
function from our props and, unfortunately for our sales goals, churn another user.
This is a lot of boilerplate and has a tendency to get complex really fast. If only there was another way!
“Well, actually, Papa Larry,” I hear you interjecting from behind your monitor, “we could do this without a lifecycle method and only one piece of state.” My dear friend. Yes, this code is slightly contrived so that I can show you all the main features of hooks with a fairly straightforward example. Just keep your susurruses to yourself until after the show.
This is where Hooks come into play. Let’s fast-forward from early evening in the American Midwest on February 5th, 2019, to late evening, when suddenly React 16.8 was released and it was officially titled “The One With Hooks.”
Let’s take our original functional component and add state with Hooks:
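A sketch of the Hooks version (state variable names from the post; the messages list is assumed, as above):

```
import React, { useState, useEffect } from 'react';

const messages = [
  'Cancel Account',
  'Are you sure?',
  'You will lose all your data!',
];

const CancelAccountButton = ({ onClick }) => {
  const [clicks, setClicks] = useState(0);
  const [buttonText, setButtonText] = useState('');

  useEffect(() => {
    if (clicks < messages.length) {
      setButtonText(messages[clicks]);
    } else {
      onClick();
    }
  }, [clicks]);

  return <button onClick={() => setClicks(clicks + 1)}>{buttonText}</button>;
};
```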
Our Hooks implementation is about half as long as our class implementation. I would argue that it’s also significantly easier to read. Let’s break this down bit-by-bit to discuss each piece of the hooks API:
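The two declarations in question:

```
const [clicks, setClicks] = useState(0);
const [buttonText, setButtonText] = useState('');
```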
At the top of our function, we call the useState
method to declare two state variables: clicks
and buttonText
. useState
takes in an initial value and returns a state variable and setter method, which we access locally using array destructuring. In this case, we set the initial state of clicks
to 0
and leave buttonText
empty.
Behind the scenes, React uses our component’s scope to create and track these state variables. We must always define these variables in the same order every time this function executes, or React will get our variables all mixed up and our logic won’t make any sense.
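The effect, as described below (the messages list is assumed):

```
useEffect(() => {
  if (clicks < messages.length) {
    setButtonText(messages[clicks]);
  } else {
    onClick();
  }
}, [clicks]);
```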
The useEffect
method is essentially a replacement for the componentDidMount
and componentDidUpdate
lifecycle methods. It takes in a function that will be called after every render. Here we take advantage of closures to test the value of our clicks
state variable and use setButtonText
to update our buttonText
state variable. The second argument to useEffect
is an array of state variables to watch; if none of them changed since the last render, the effect will be skipped.
We can call useEffect
as many times as we want in our component. This allows us to create a clear separation of concerns if we need to define several different effects.
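The render portion, sketched:

```
return (
  <button onClick={() => setClicks(clicks + 1)}>
    {buttonText}
  </button>
);
```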
This is our same old render logic, but in this case we’re using the setClicks
function returned to us by useState
.
Design and marketing like this concept of delaying an action and just changing the text so much that they want to use it all over the site. Now we have stateful logic that needs to be reused. This is where the concept of “Custom Hooks” comes in:
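A sketch of the custom hook (the hook name useTextByCount comes from the post; its signature and the messages list are assumed):

```
import React, { useState, useEffect } from 'react';

const messages = [
  'Cancel Account',
  'Are you sure?',
  'You will lose all your data!',
];

// Returns the text for the current count, or calls onLimitExceeded
// once the count runs past the end of the texts list
const useTextByCount = (texts, count, onLimitExceeded) => {
  const [text, setText] = useState('');

  useEffect(() => {
    if (count < texts.length) {
      setText(texts[count]);
    } else {
      onLimitExceeded();
    }
  }, [count]);

  return text;
};

const CancelAccountButton = ({ onClick }) => {
  const [clicks, setClicks] = useState(0);
  const buttonText = useTextByCount(messages, clicks, onClick);

  return <button onClick={() => setClicks(clicks + 1)}>{buttonText}</button>;
};
```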
Here I’ve created my own hook called useTextByCount
that abstracts away the entire concept of the buttonText
state variable. We can use this custom hook in any functional component. Abstracting stateful logic is a tall task in class components, but it’s completely natural using Hooks.
Hooks are the result of the React maintainers responding to the way React developers want to write code, enabling us to use powerful stateful concepts in a cleaner, functional system. This is a natural next step for the React API, but it’s not going to deprecate all your class components. Hooks are completely optional and backwards compatible with current React concepts, so there’s no need to make a Jira ticket to refactor all your components tomorrow morning. Hooks are here to help you write new components faster and better, giving you new options when you need to start adding state to that simple button component.
Check out the Hooks Guide and the Rules of Hooks for more information.
Happy hooking!
First day was okay, but I had trouble finding sessions that interested me and weren’t geared towards introductory use.
The first keynote, “The Next Billion Internet Users” by Angela Oduor Lungati, described the rapid rise in internet users in Africa and Asia. Her team made their app mobile-first, as many users only have access to the internet on a smart device. This allowed the app to be used in many different and unexpected situations. Increased connectivity also allows more people to participate in the world of software. According to a recent GitHub survey, Asia is opening the largest number of repos on the site.
Burr Sutter of Red Hat talked about Istio, Red Hat’s “service mesh” system. It’s a pretty neat way to manage services with k8s and OpenShift. Users can launch multiple service containers with different features and seamlessly direct traffic to these containers based on certain rules. Users could even direct traffic to a new and old version of a container to determine how a new version interacts with a production environment, with end-users only ever interacting with the old version.
“The Next Big Wave” (Zaheda Bhorat) mostly focused on how to create a welcoming open-source project that’s easy to contribute to, especially in a rapidly more connected world. As usual, READMEs and CONTRIBUTING docs are king, as well as good tutorials, wikis, and getting started guides.
In “Design in Open Source”, Una Kravets discussed how Design Thinking can benefit open-source projects. Unfortunately, it’s really difficult to get designers to participate.
“Turning ‘Wat’ into ‘Why’” (Katie McLaughlin) brought up a few idiosyncrasies from many different languages and discussed why the language behaves in that manner. No blame; just curiosity.
“Why Modern Apps Need a New Application Server” (Nick Shadrin) was an overview of the new Nginx Unit project, iterating on nginx
with a focus on microservice architectures. This system actually launches applications, and several libraries/packages/modules are available for things like NodeJS and Go to enable this functionality. Configuration of any language was nearly identical, and defining the number of running instances was really easy through JSON endpoints. Auto-scaling was also included out-of-the-box.
“Open Data: Notes from the Field” was a panel discussion on how the Research Triangle uses citizens' data to make decisions. Much of the data used is decided upon on a municipal level as opposed to federal or state.
“Using Open Source for Large Scale Simulation in Robotic and Autonomous Driving Applications” (Nate Koenig) was largely a discussion about tools used to simulate robots. Obviously, testing robots in real life can be dangerous and expensive, so advanced simulation technology is crucial to iterating fast on this kind of hardware.
“React Already Did That” (Dylan Schiemann) hit on how React has evolved our ecosystem; components and functional programming will leave a permanent mark on JS development. Although React may not be around in 5 years, it is highly likely that the popular frameworks at that time will be fairly similar (think: Vue, Ionic, Svelte). This talk sort of devolved into a discussion of the speaker’s “competing” technology Dojo, which was somewhat of a precursor to React. It also uses TypeScript, which reminds me a lot of the tech stack we use at Granular.
“You XSS Your Life! How do we keep failing at security on the web?” (David Rogers) was an overview of how easy it is to fall for cross-site scripting attacks in modern web applications. Malicious user input could take down our system or reveal user data, so we should be scrubbing data anywhere it gets entered. Lots of tools available. Although this is touched upon a lot, I know that I’m guilty of just taking user input and using it unthinkingly.
I found more relevant sessions to go to during the second day, which surprised me as normally the “last” day of a conference is worse than the first.
“Five Things You Didn’t Know Python Can Do!” by Nina Zakharenko went over things I already knew Python could do. Python runs important code in all industries, and has found itself indispensable in the world of science.
Babel developer Henry Zhu gave a talk titled “Maintainer’s Virtual” describing the world of full-time open-source development. Zhu left his job and works on Babel based on donations from the community. He talked about the guilt associated with taking breaks when people are donating their money to you, and how that easily leads to burnout. He talked about trying not to put too much pressure on yourself to be constantly contributing.
The final keynote, “Money as an Open Protocol” by Andreas Antonopoulos, was… interesting. A dash of conspiracy theory and anarchism made this talk a little uncomfortable. Big banks are not our friends, and this speaker was adamant that we would see the fall of centralized banking in the next 20 years. Bitcoin and friends are the predecessor to a new global digital currency. The choice we’ll be facing soon is whether we have a decentralized open currency akin to Bitcoin as our primary form of money or something more insidious such as “Facebook Coin”, “Google Coin”, “Apple Coin”, or “America Coin.” A fun quote from this talk was “The opposite of authority is autonomy.” Also “If money is the root of all evil, then sudo evil
.” Although this talk was captivating, it felt like a pitch for a dystopian novel. Crowd ate it up.
Kyle Simpson’s “Keep Betting on JavaScript” was probably my favorite session. Kyle gave a brief history of JavaScript, from its creation through its stagnation to the rapidly-evolving language we all know and love today. JavaScript’s failure to change in the 00s was largely due to a lack of unity in the community, ultimately leading to a spec that was thrown away. Other languages began to appear that looked like they would leave JS in the dust. Just as JS was on the brink of death, the community united, new features were specced out, and JS rose from the ashes. Many people still hate on JavaScript, and this is largely due to the fact that they are “emotionally attached to the idea that JavaScript sucks.” JavaScript is the lingua franca of programming; it’s readable by developers of many languages, and ideas can easily be expressed. Kyle was very much into progressive web applications, with native apps becoming an unnecessary part of the ecosystem. Every app should have at least one ServiceWorker to guarantee that a tab will continue to exist, even after we get on an airplane. “TypeScript is a really intelligent linter”, Kyle says, but aside from that, it can begin to confuse the world we live in if we use too many extended features. Transforming our code with all of these tools can make debugging harder, and can make it difficult for other developers to figure out what we’ve done using “View Source.” “View Source” is the ultimate tool in a new developer’s toolkit, allowing them to see how a site works and helping them develop new ideas. Kyle was wary of many of the new JavaScript features that are machine-centric; code features that will only be used by libraries and generators and never by an everyday programmer. Kyle insists that we should focus on developers first. Even WebAssembly and similar ideas are going to make web development a more complicated landscape to enter. Kyle started early and ended late. Further reading: Alan Kay, Douglas Engelbart, Tom Dale.
“Cross-Platform Desktop Apps with Electron” (David Neal) was an introductory guide to using Electron, the cross-platform desktop UI technology behind Atom, VSCode, and the Slack desktop app. Starting in Electron seems easier than I expected. Architecture is similar to developing for the web, where we have server-side code and client-side code. It’s better to make calls to the server than to run on the UI thread. Pretty much anything that you can install with npm can readily be used in Electron, including UI frameworks such as React and testing tools such as mocha.
I watched some lightning talks during lunch. Raspberry Pi celebrates their 6th year, something something Blockchain databases, jump-starting an open-source career via blogging or speaking, examples of unconscious bias in AI datasets, all the wrong ways to pronounce “kubectl”, more on Red Hat’s Istio service mesh framework, and ideas for replacing docker
with other container tools.
“Framework Free – Building a Single Page Application Without a JS Framework” (Ryan Miller) described the way we used to make websites in 2013 without frameworks, but with all the nice HTML5 features. It’s somewhat important to know how all of these things work under the covers, especially if you have to debug in the browser. It’s not always necessary to have a big, hefty framework. I was somewhat horrified by the number of people in the audience who didn’t know what jQuery was.
In “Intro to SVG”, Tanner Hodges explained the basics of SVGs, when to use them, and when to seek alternatives. Interesting cases included textured content (which rendered significantly smaller as a PNG over an SVG), content that included text (which needed to be checked to verify that the text was rendered as native SVG elements), and photography (which, when rendered to SVG, literally included a data hash of the original image at high resolution, creating a massive file). He also touted the importance of new standards such as webp
and webm
when displaying content on the web.
In “WTH is JWT”, Joel Lord broke down how JWTs are constructed by combining a header (naming the signing algorithm), the payload (basic user information), and a signature. Although the token can be decoded and parsed to get to the information, the signature at the end is produced by hashing the header and the payload with a secret key, so would-be attackers cannot simply change the JWT to gain access to the system without knowing the secret key. Of course, more security measures are necessary to keep intruders from gaining access, such as encrypted sessions and auth servers. The JWT spec is still in the proposal period, so its definition may still change before finalization.
Pretty good conference. I made a few interesting connections. I learned that a lot of people are in love with Vue right now, TypeScript is still popular, and microservices are the only way to build an application. My biggest complaint was that coffee was hard to come by.
componentWillReceiveProps
, componentWillMount
, and componentWillUpdate
will be deprecated in a future version of React. This is because of the eventual migration of React to async rendering; these lifecycle methods will become unreliable when async rendering is made default.
In place of these methods, the new static method getDerivedStateFromProps
was introduced. My team and I struggled at first in wrapping our heads around how to migrate our many uses of componentWillReceiveProps
to this new method. It’s generally easier than you think, but you need to keep in mind that the new method is static, and therefore does not have access to the this
context that the old lifecycle methods provided.
getDerivedStateFromProps
is invoked every time a component is rendered. It takes in two arguments: the next props
object (which may be the same as the previous object) and the previous state
object of the component in question. When implementing this method, we need to return the changes to our component state
or null
(or {}
) if no changes need to be made.
Here’s a pattern we were using in many components throughout our codebase:
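Based on the description below, the pattern looked roughly like this (the selectedTab name comes from the post):

```
componentWillReceiveProps(nextProps) {
  if (nextProps.selectedTab !== this.state.selectedTab) {
    this.setState({ selectedTab: nextProps.selectedTab });
  }
}
```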
This lifecycle method fired when we were about to receive new props
in our component, passing in the new value as the first argument. We needed to check whether the new props
indicated a change in the state of our tab bar, which we stored in state
. This is one of the simplest patterns to address with getDerivedStateFromProps
:
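A sketch of the replacement, matching the description below:

```
static getDerivedStateFromProps(nextProps, prevState) {
  if (nextProps.selectedTab === prevState.selectedTab) {
    return {};
  }
  return { selectedTab: nextProps.selectedTab };
}
```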
This code works in exactly the same way, but, since it’s static, we no longer use the context provided by this
. Instead, we return any state changes. In this case, I’ve returned an empty object ({}
) to indicate no state change when the tabs are identical; otherwise, I return an object with the new selectedTab
value.
Sometimes you may have to perform some operations on the new props
, but then you can still just compare the result to your previous state to figure out if anything changed. There may be other areas where you need to store some extra state duplicating your old props
to make this work, but that may also be an indication that you need to use an alternative method.
We also needed to replace calls to componentWillMount
. I found that these calls were usually directly replaceable by componentDidMount
, which will allow your component to perform an initial render and then execute blocking tasks. This may also require adding some loading state to your component, but that’s better than a hanging app.
Here’s an example of a componentWillMount
we had originally that blocked render until after an API call was made:
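A sketch of that shape of component (the component name, api helper, and rendered markup are all assumed):

```
class UserList extends React.Component {
  constructor(props) {
    super(props);
    this.state = { users: [] };
  }

  componentWillMount() {
    // Kicks off an API call before the first render;
    // nothing useful renders until it resolves
    api.fetchUsers().then((users) => {
      this.setState({ users });
    });
  }

  render() {
    return (
      <ul>
        {this.state.users.map((user) => (
          <li key={user.id}>{user.name}</li>
        ))}
      </ul>
    );
  }
}
```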
Afterwards, I changed the state to show the component as loading on initial render and replaced the componentWillMount
with componentDidMount
:
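The same sketch after the swap, with a loading flag in state (same assumed names as above):

```
class UserList extends React.Component {
  constructor(props) {
    super(props);
    this.state = { users: [], loading: true };
  }

  componentDidMount() {
    // The first render happens immediately; the list fills in afterwards
    api.fetchUsers().then((users) => {
      this.setState({ users, loading: false });
    });
  }

  render() {
    if (this.state.loading) {
      return <div>Loading...</div>;
    }
    return (
      <ul>
        {this.state.users.map((user) => (
          <li key={user.id}>{user.name}</li>
        ))}
      </ul>
    );
  }
}
```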
Very similar to the methods discussed above, componentWillUpdate
is invoked when a component is about to receive new props or state and the render
method is definitely going to be called. Here’s an example of something we were doing previously:
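A sketch of the kind of thing we had (the prop name and helper are hypothetical):

```
componentWillUpdate(nextProps) {
  if (nextProps.selectedItem !== this.props.selectedItem) {
    this.fetchDetails(nextProps.selectedItem);
  }
}
```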
And, replacing that usage with componentDidUpdate
:
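And the replacement, comparing against the previous props instead (same hypothetical names):

```
componentDidUpdate(prevProps) {
  if (prevProps.selectedItem !== this.props.selectedItem) {
    this.fetchDetails(this.props.selectedItem);
  }
}
```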
componentDidUpdate
is similar to componentDidMount
except that it is called after a change in state or props occurs instead of just on the initial mount. As opposed to getDerivedStateFromProps
, we have access to the context provided by this
. Note that this method also has arguments for prevProps
and prevState
, which provides the previous versions of the component’s props
and state
for comparison to the current values.
The deprecation of these lifecycle methods won’t happen until React 17, but it’s always good to plan ahead. Many of the ways my team was using these deprecated methods could be considered an anti-pattern, and I suspect that your team may be in the same predicament.
In particular, I explored the uses of the govendor package, mostly because it’s supported by default by Heroku. The docs on GitHub are a lot more thorough than what I’ll go over here.
govendor
is easily installed within the go ecosystem. Assuming that $GOPATH/bin
is in your path:
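Something like:

```
go get -u github.com/kardianos/govendor
```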
Now we just initialize the govendor
directory and start installing dependencies. The govendor fetch
command is pretty much all you’ll need:
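Roughly (the gorm and bcrypt import paths here are my best guesses at the packages mentioned below):

```
govendor init
govendor fetch github.com/jinzhu/gorm
govendor fetch golang.org/x/crypto/bcrypt
```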
init
will create a vendor
directory in your project path. Go will check this directory for any packages as though they were in your $GOPATH/src
directory. The fetch
calls will add new packages or update the given package in your vendor
directory; in this case, I’ve fetched the latest versions of gorm
and bcrypt
.
This might seem painful, but the thing to do next is to commit everything in the vendor directory to your repository. Now you have it forever! This means that anyone who wants to run this version of your code in the future doesn’t have to worry about dependency versions and can instantly run your package with a valid go install.
If you don’t want to add all these packages to your repository, I don’t blame you. You can get around this by committing just your vendor/vendor.json
file and then using govendor sync
to install the missing packages after downloading your source code. This should be familiar to anyone who’s used bundler
in ruby, virtualenv
in python, or npm
in Node.JS. If you’re using git, you’ll want a .gitignore
with the following:
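The two lines in question:

```
vendor/*
!vendor/vendor.json
```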
This will ignore everything in vendor/
except for the vendor.json
file which lists all your packages and their corresponding versions. Now, to install any packages from vendor.json
that you don’t already have in your vendor
directory:
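Which is just:

```
govendor sync
```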
govendor
is a pretty powerful tool for vendoring your go dependencies and getting your application Heroku-ready, and I recommend checking out the docs for a more advanced overview. There are also many other vendoring options available, including an official go vendoring tool called dep that works with go 1.9+. dep
will most definitely play a big role in refining the ideas that these third-party tools have created and the go ecosystem will become more stable.
The goal here is for users to visit a page and then be immediately redirected to the new site. I’ve defined two environment variables to be used in this project: SITENAME
, a human-readable name for our website, and SITEURL
, the full URL that we actually want the user to end up on. I’ve defined a PHP file called index.php
:
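A sketch of that index.php, based on the description below (SITENAME and SITEURL come from the post; the markup around them is assumed):

```
<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="refresh" content="0; url=<?php echo getenv('SITEURL'); ?>" />
    <title><?php echo getenv('SITENAME'); ?></title>
  </head>
  <body>
    <p>Redirecting you to <?php echo getenv('SITENAME'); ?>...</p>
  </body>
</html>
```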
The important piece here is the <meta>
tag, which actually does the redirect for us. The only PHP code here are echo getenv
commands that render our environment variables in the template. Since I’m a PHP novice, there may be a better way to do this, but the echo
works just fine.
We also need to tell Apache how to serve the application. We want to match any routes and render our index.php
. So we create a .htaccess
file:
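A minimal rewrite rule that sends every route to index.php (exact flags assumed):

```
RewriteEngine On
RewriteRule ^ index.php [L]
```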
To satisfy Heroku, we need to list the dependencies for our PHP application. Fortunately for us, we don’t have any dependencies that Heroku does not provide by default. We’ll just create a composer.json
file in the root of our project with an empty object:
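Literally just:

```
{}
```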
That’s everything we need. You could recreate the project, but you could also just pull down the project listed above and push it up to Heroku:
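Something like the following, with placeholders for the project (the repository URL isn’t reproduced here):

```
git clone <project-url>
cd <project-dir>
heroku create
git push heroku master
```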
With your application available on Heroku, we still need to set the environment variables described earlier as config variables:
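For example (the values here are placeholders):

```
heroku config:set SITENAME="My Good Site"
heroku config:set SITEURL="https://www.mygoodsite.com"
```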
Now tell Heroku all the domains that will be accessing this application. These are the domains you want users not to use:
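Using the example domain referenced below:

```
heroku domains:add yourbaddomain.com
heroku domains:add www.yourbaddomain.com
```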
Now you just need to add the records indicated by the above command to your DNS records. These will probably be CNAME records pointing from @
to yourbaddomain.com.herokudns.com
or www
to yourbaddomain.com.herokudns.com
.
A Promise
provides a simplified mechanism for performing asynchronous work in JavaScript without using the classic setTimeout
-callback approach. Seeing as it’s been about 4 months since my previous post, a new asynchronous concept is on the rise as part of the ES2017 specification: async
and await
.
I became aware of async
and await
after reading David Walsh’s blog, at which point I disregarded the new features as being “too soon” and “not different enough” from a Promise
to warrant a second thought. Then, yesterday, I used them, and my life was, once again, forever changed.
await
is used to essentially wait for a Promise
to finish. Instead of using a callback with a then
clause, await
allows you to perform the action and/or store the result like you’re within a synchronous function.
async
is a keyword identifier used on functions to specify that that function will use await
. Try to call await
in a function not labeled as async
and you’re going to have a bad time. Any async
function returns a Promise
.
Let’s see an example:
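A sketch of the example described below (getName comes from the post; the sub-name functions and their return values are assumed):

```javascript
const getFirstName = () => Promise.resolve('Larry');
const getMiddleName = () => Promise.resolve('David');
const getLastName = () => Promise.resolve('Price');

async function getName() {
  // Each await pauses this function until its Promise resolves
  const first = await getFirstName();
  const middle = await getMiddleName();
  const last = await getLastName();
  return `${first} ${middle} ${last}`;
}

getName().then((name) => console.log(name));
console.log('a special message');
```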
We have three functions which each return a Promise
, and an async
function which calls those functions sequentially and uses the results to construct a string. We call the getName
function (which is async
and therefore returns a Promise
) and log the results. Our last command logs a special message. Due to the asynchronous nature of the getName
function, our special message is logged first, and then the result of getName
.
This comes in handy when you’re depending on the results of a Promise
to do some work or pass into another asynchronous call. But, in the case of our getName
function above, we could be getting all three of the names at once. This calls for the brilliant Promise.all
method, which can also be used with async
. Let’s modify our sub-name functions to all use async
and then fetch them all at once:
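A sketch of the Promise.all version (same assumed names as above):

```javascript
const getFirstName = async () => 'Larry';
const getMiddleName = async () => 'David';
const getLastName = async () => 'Price';

async function getName() {
  // Kick off all three at once; results come back in the order given
  const [first, middle, last] = await Promise.all([
    getFirstName(),
    getMiddleName(),
    getLastName(),
  ]);
  return `${first} ${middle} ${last}`;
}

getName().then((name) => console.log(name));
```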
Since an async
function just returns a Promise
, we can directly use (and even inter-mix) async
functions inside Promise.all
, and the results come back in an ordered array.
OK, what if we want to fire off some long-running task and do some other work in the meantime? We can defer our use of await
until after we’ve performed all the intermediate work:
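A sketch of deferring the await (the task and intermediate work are stand-ins):

```javascript
const startLongRunningTask = () =>
  new Promise((resolve) => setTimeout(() => resolve(42), 100));

async function run() {
  // Fire off the slow work, but don't wait for it yet
  const taskPromise = startLongRunningTask();

  // ...do other, synchronous work in the meantime...
  const intermediate = 'intermediate work done';

  // Only wait for the result when we actually need it
  const result = await taskPromise;
  return `${intermediate}: ${result}`;
}

run().then((message) => console.log(message));
```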
This example reiterates that you can use async
functions just like you would a Promise
, but with the added benefit of using await
to wait for the results when necessary.
I know what you’re thinking: “All these positives, Larry! Is there nothing negative about async
/await
?” As always, there are a couple of pitfalls to using these functions. The biggest nuisance for me is the loss of the catch
block when converting from a Promise
chain. In order to catch errors with async
/await
, you’ll have to go back to traditional try/catch
statements:
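A sketch of error handling with try/catch (the failing function and fallback are assumed):

```javascript
const getFirstName = () => Promise.reject(new Error('no first name!'));

async function getName() {
  try {
    const first = await getFirstName();
    return first;
  } catch (err) {
    // The rejected Promise lands here instead of in a .catch() block
    return `error: ${err.message}`;
  }
}

getName().then((name) => console.log(name));
```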
The only other real downside is that async
and await
may not be fully supported in your users' browsers or your version of Node.JS. There are plenty of ways to get around this with Babel and polyfills, but, to be honest, I dedicated a large chunk of time yesterday afternoon to upgrading all of our libraries and babel versions to get this to work properly everywhere. Your mileage may vary, and, if you’re reading this 6 months from when it was posted, I’m sure it will be available by default in any implementations of ECMAScript.
Introduced in the ES2015 specification, MDN dryly describes a Promise
as:
The Promise object represents the eventual completion (or failure) of an asynchronous operation, and its resulting value.
But… what exactly does that entail? How does it differ from just using callbacks?
Let’s start with a simple example. If I want to perform an operation asynchronously, traditionally I would use setTimeout
to do work after the main thread has finished and use a callback parameter to let the caller utilize the results. For example:
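A sketch matching the described output (function and message names assumed):

```javascript
function someAsyncTask(callback) {
  // Schedule work to happen after the main thread finishes
  setTimeout(() => {
    callback('the task is happening');
  }, 0);
}

console.log('before...');
someAsyncTask((result) => {
  console.log(result);
});
console.log('after...');
```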
Try running this yourself with node
, and you’ll see that ‘before…’ and ‘after…’ are printed followed by ‘the task is happening’.
This is perfectly valid code, but it’s just so unnatural to handle asynchronous tasks this way. There’s no standard to which parameter should be the callback, and there’s no standard to what arguments will be passed back to a given callback. Let’s take a look at the same situation using the new Promise
class:
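The same situation, sketched with Promise.resolve as described below:

```javascript
function someAsyncTask() {
  // Return an already-resolved Promise; .then() runs after the main thread
  return Promise.resolve('the task is happening');
}

console.log('before...');
someAsyncTask().then((result) => {
  console.log(result);
});
console.log('after...');
```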
Let’s walk through this. In someAsyncTask
, we’re now returning a call to Promise.resolve
with our result. We call then
on the result of someAsyncTask
and then handle the results. Promise.resolve
is returning a resolved Promise
, which is run asynchronously after the main thread finishes its initial work (the final console.log
, in this case).
Immediately, this feels a lot cleaner to me, but this is a really simple example.
Think about a situation where you need to perform multiple asynchronous callbacks that each depend on the results of the last callback. Here’s an example implementation using callbacks:
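A condensed sketch of the nested-callback version (getFirstName, getLastName, and concatName come from later in the post; the values are assumed):

```javascript
function getFirstName(callback) {
  setTimeout(() => callback('Larry'), 0);
}
function getLastName(callback) {
  setTimeout(() => callback('Price'), 0);
}
function concatName(first, last, callback) {
  setTimeout(() => callback(`${first} ${last}`), 0);
}

// Each result feeds the next call, so the nesting keeps growing
getFirstName((first) => {
  getLastName((last) => {
    concatName(first, last, (name) => {
      console.log(name);
    });
  });
});
```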
I think we can all agree that this is not friendly code. What makes a Promise
truly special is its natural chainability. As long as we keep returning Promise
objects, we can keep calling then
on the results:
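A sketch of the chained version (same assumed values):

```javascript
const getFirstName = () => Promise.resolve('Larry');
const getLastName = () => Promise.resolve('Price');
const concatName = (first, last) => Promise.resolve(`${first} ${last}`);

getFirstName()
  .then((first) => {
    // concatName needs both results, so a little nesting remains here
    return getLastName().then((last) => concatName(first, last));
  })
  .then((name) => {
    // The final action happens outside the nesting
    console.log(name);
  });
```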
Since concatName
is dependent on the result of both getFirstName
and getLastName
, we still do a little bit of nesting. However, our final asynchronous action can now occur on the outside of the nesting, which will take advantage of the last returned result of our Promise
resolutions.
Error handling is another can of worms in callbacks. Which return value is the error and which is the result? Every level of nesting in a callback has to either handle errors, or maybe the top-most callback has to contain a try-catch block. Here’s a particularly nasty example:
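A condensed sketch of the error-prone callback version, including the missing return noted below (error messages assumed):

```javascript
function getFirstName(callback) {
  setTimeout(() => callback(null, 'Larry'), 0);
}
function getLastName(callback) {
  // First argument is the error, second is the result -- by convention only
  setTimeout(() => callback(new Error('no last name!')), 0);
}
function concatName(first, last, callback) {
  setTimeout(() => callback(null, `${first} ${last}`), 0);
}

getFirstName((err, first) => {
  if (err) {
    console.log(err.message);
    // Oops: no return here, so execution falls through on error
  }
  getLastName((err, last) => {
    if (err) {
      return console.log(err.message);
    }
    concatName(first, last, (err, name) => {
      if (err) {
        return console.log(err.message);
      }
      console.log(name);
    });
  });
});
```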
Every callback has to check for an individual error, and if any level mishandles the error (note the lack of a return on error after getFirstName
), you’re guaranteed to end up with undefined behavior. A Promise
allows us to handle errors at any level with a catch
statement:
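A sketch of the Promise version with a single catch (same assumed names):

```javascript
const getFirstName = () => Promise.resolve('Larry');
const getLastName = () => Promise.reject(new Error('no last name!'));
const concatName = (first, last) => Promise.resolve(`${first} ${last}`);

getFirstName()
  .then((first) => getLastName().then((last) => concatName(first, last)))
  .then((name) => {
    // Skipped entirely when any Promise above rejects
    console.log(name);
  })
  .catch((err) => {
    // One catch handles a rejection from any point in the chain
    console.log(err.message);
  });
```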
We return the result of Promise.reject
to signify that we have an error. We only need to call catch
once. Any then
statements from unresolved promises will be ignored. A catch
could be inserted at any nesting point, which could give you the ability to continue the chain:
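A sketch of a mid-chain catch that recovers and continues (fallback value assumed):

```javascript
const getFirstName = () => Promise.resolve('Larry');
const getLastName = () => Promise.reject(new Error('no last name!'));
const concatName = (first, last) => Promise.resolve(`${first} ${last}`);

getFirstName()
  .then((first) => {
    return getLastName()
      .catch(() => 'Unknown') // recover with a default; the chain continues
      .then((last) => concatName(first, last));
  })
  .then((name) => {
    console.log(name);
  });
```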
So far, we’ve been returning Promise
objects using resolve
and reject
, but there’s also the ability to define our own Promise
objects with their own resolve
and reject
methods. Updating the getFirstName
variable:
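A sketch of getFirstName defined with the Promise constructor:

```javascript
const getFirstName = () =>
  new Promise((resolve, reject) => {
    // We decide when to resolve (or reject) inside the executor
    setTimeout(() => {
      resolve('Larry');
    }, 0);
  });
```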
We can also run our asynchronous tasks without nesting by using the Promise.all
method:
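A sketch using Promise.all (same assumed names and values):

```javascript
const getFirstName = () => Promise.resolve('Larry');
const getLastName = () => Promise.resolve('Price');

Promise.all([getFirstName(), getLastName()])
  .then(([first, last]) => {
    // Results arrive in the order the promises were given
    console.log(`${first} ${last}`);
  })
  .catch((err) => {
    // Any single rejection rejects the whole thing
    console.log(err.message);
  });
```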
Give Promise.all
a list of promises and it will call them (in some order) and return all the results in an array (in the order given) as a resolved Promise
once all given promises have been resolved. If any of the promises are rejected, the entire Promise
will be rejected, resulting in the catch
statement.
Sometimes you need to run several methods, and you only care about the first result. Promise.race
is similar to Promise.all
, but only waits for one of the given promises to return:
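A sketch matching the behavior described below, where func2 usually (but not always) wins the race (timings assumed):

```javascript
const func1 = () =>
  new Promise((resolve) => {
    setTimeout(() => resolve('func1'), Math.random() * 100);
  });
const func2 = () =>
  new Promise((resolve) => {
    setTimeout(() => resolve('func2'), Math.random() * 50);
  });

Promise.race([func1(), func2()]).then((winner) => {
  // Whichever promise settles first provides the result
  console.log(winner);
});
```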
Sometimes, ‘func1’ will be printed, but most of the time ‘func2’ will be printed.
…And that’s the basics! Hopefully, you have a better understanding of how a Promise
works and the advantages provided over traditional callback architectures. More and more libraries are depending on the Promise
class, and you can really clean up your logic by using those methods. As Javascript continues to evolve, hopefully we find ourselves getting more of these well-designed systems to make writing code more pleasant.
Public speaking makes me nervous, and I’m not alone. A crowd of people is listening to your stutters, nit-picking your errors, and judging your clothing. No one is immune from the fear of public speaking. What can you do about it? Armed with Lara Hogan’s Demystifying Public Speaking, we can learn how to make public speaking a bit less stressful. There is no complete answer, but this book is full of tips and guidance for speaking engagements of any size and gravitas.
Need ideas for public speaking? Take advantage of the work you do every day. Prepare a presentation for the tough code you wrote last week, the library you found, the Agile processes you use, or how you set up your workstation, favorite tool, or cloud service.
Start small. Run the topic by your coworkers with a rough outline. Run it by your spouse to get an outside perspective. You can tweak your ideas based on the feedback, and then move on to bigger venues. Do a lunch and learn, a lightning talk, or a local meetup.
Your end goal does not have to be a conference. Conferences can be huge events with many attendees, and can be extremely daunting. Many people only go to conferences for the big names, and your talk might be more easily forgotten amongst all the ultra-hyped celebrity talks.
Then again, if that’s what you’re into, you could become the celebrity after doing a few conference talks. If you do well at one or two conferences, there’s a good chance you’ll start getting invited to more. These conferences might want you to rehash your past talk (score! minimal effort!), give you a topic, or hand you the reins to get creative.
Your audience wants you to do well. It’s a common misconception that your audience is rooting against you. They want to learn, and they want to believe that what you’re telling them is worthwhile. If you make a mistake, you don’t need to be embarrassed: everyone knows it’s hard to go on-stage in front of a group of people. Just try to correct yourself and move on.
Always include some levity in your presentation. A joke or a cat picture can help reengage an audience that may be succumbing to fatigue. Ask a silly or surprising question, maybe even going so far as to ask for some audience participation.
Presentations with lots of imagery are great, but your presentation style doesn’t have to follow any conventions. Some people are comfortable getting their cues from their notes and images, but others may prefer more traditional header and bullet point slides.
If there’s going to be a Q&A section, have your coworkers or peers hit you with some potential questions. Maybe you can beef up parts of your presentation that were misinterpreted or underrepresented.
It’s okay to say “I don’t know” during a Q&A session. You just laid down a lot of knowledge on your audience, but that doesn’t mean you have to know all the answers. Furthermore, if someone “stumps” you during a Q&A section, just admit it and move on. There’s always that one guy who asks a “question” he already knows the answer to, just to make himself appear intelligent. Ignore that guy. He’s got issues. Just say “OK” and move on to the next question.
Do what’s comfortable for you. Read directly from your notes. Put comforting reminders in your slides, like pictures of your cats or Superman squashing fascism. Use “wizard hands” or other embarrassing hand gestures. Let your personality come out, or invent a completely separate stage persona to assume while you’re presenting. All that matters is that you accomplish your task of dropping some knowledge bombs on your intended audience.
Remember: you’re the expert on this topic. If you weren’t, you wouldn’t be able to put together that presentation to begin with. Your presentation was chosen because the organizer(s) had confidence in you, your ability, and your knowledge. The audience members are there because they find meaning in your topic and believe you’re the right person to transfer that information. You’re in control.
I really only have one action item from this book:
I’ve moved to a new city this summer, and I’m starting to actively seek out local meetup groups. My goal is to find the right opportunity and the courage to participate in some lightning talks or possibly longer presentations.
Warning: This is going to come off as a diary entry about my life and is not intended as a free, highly technical post about programming. Come back later for more of that.
In mid-April, Canonical went through a major focus shift and with it a major round of layoffs. The company sought to become a more profitable business, and no longer believed the innovative work being done around Unity 8, Mir, and convergence could lead to profitability, and instead decided to focus on snaps and the cloud. Most of us working on the projects I mentioned got the axe, as well as others in various other departments.
To quote Vonnegut: “So it goes.”
I understand the shift from a business perspective, but I have many opinions about the layoff process and the future of Ubuntu (ask me in person sometime; ticket price is currently set at two whiskey drinks). Ubuntu is shifting to GNOME 3 by default in October, making the line between Ubuntu and Fedora a bit cloudier. The open-source community lost a unique desktop environment, an interesting (if ambitious) vision for the future, and a large number of contributors putting in at least 40 hours a week.
In the meantime, my wife had set a date to leave her job and go back to graduate school in another state. We needed to find a new place to live in the next couple months and sell our home in Indiana. Needless to say, losing 2/3 of our income added a little bit of stress to the situation.
Ever since I got back from FOSDEM in February, I was slowly losing weight. No matter how much I ate, the pounds continued to drop. After the layoff, my health deteriorated quickly. Going to bed each night, I could hear my heart beating way too fast. I got tired walking up the stairs. It literally hurt to sit on my bony behind. With the layoff, my health insurance was also gone unless I wanted to pay $500+ each month to “take advantage” of COBRA.
So it goes.
Fortunately, I was able to find a new job fairly quickly. There are lots of companies looking for programmers right now, so I had my pick of interesting domains to apply to. I landed at a company called Illinois Rocstar (IR for short) as a Research Engineer, working on SBIR contracts for the Department of Energy and Department of Defense. The company is mostly non-traditional programmers, and I came in with a lot to offer as far as writing software, building applications, and designing processes. It’s a fairly unique experience to work in the world of science with co-workers who come from very different backgrounds. I work in an office again, but it’s an office of fewer than 20 people in the beautiful University of Illinois Research Park.
With new job on hand, I acquired new health insurance as quickly as possible and went to see a physician. The diagnosis: adult-onset diabetes. Over the past three months, I have made drastic changes to my diet and significantly improved my health. It’s amazing how carb-heavy the American diet is; added sugar pervades practically everything we eat. I currently make most of my own meals at home, and it has intensified my love of the culinary arts.
We found a new home in our new city (Champaign-Urbana, IL) quickly: it’s a 1913 arts & crafts style home with a tiny yard, big kitchen, and a basement. It’s walking distance to downtown, parks, and grocery stores. We adore this home and were really lucky to find it.
For a while, it seemed the pattern was all tragic events followed by a series of purely positive events. But the story is never quite that simple.
While my health has been slowly improving, my great-grandmother passed away of natural causes, and my parents revealed that they had quietly been dealing with some of my father’s health issues. Although I am getting better, I still have a ways to go before I could be described as fully healthy; but I’ve found a new physician in Champaign, and we’re working on it.
Although we found the new house in Champaign quickly, selling our home in Indiana is another story. With the housing market the way it is right now, every schmuck on the street was telling us we would sell our home the first weekend we listed. The reality is that it took about 80 days to find a buyer, and the closing date is 134 days after listing (still a couple weeks from the date I’m writing this post). This meant two mortgages for a couple months, which has forced us to stretch our dollar a bit. The good news, of course, is that I will no longer be a real estate mogul in a short couple of weeks.
Enough of my sob story.
I’m just about ready to start looking for opportunities to get involved in programming outside of the office again. I have at least one mobile application in mind, I could blog forever about all the React I’ve done and scientific tools I’ve learned to use, and I’ll be looking for locals to start sharing my experience with.
Time to get back in the game.
I met my wife when I joined my high school robotics team. I was a hammer: with no singular skill that I wanted to focus on, I was thrown into painting, cutting, drilling, electrical, lifting, sanding, building the website, and even marketing. Funnily enough, I never actually worked on programming the robot. As far as I knew at the time, my wife was doing similar work with more focus on marketing and operating the robot.
Fast-forward 10 years. I do most of the indoor home improvement work (electrical, dry walling, painting), and our yard work is fairly balanced. A few weekends ago, we were swapping out the light fixtures in the master bathroom. Well, she was mostly holding the flashlight while I cursed at the previous owner’s hackwork. As we’re wrapping up the first fixture, she starts asking really basic electrical questions. Beginner’s questions.
This struck me as a little odd. I answered with a bit of sarcasm, mentioning that it’s the absolute basic stuff she should remember from being on the robotics team. As it turns out, the (all-male) mentor staff never really encouraged the girls to work directly on the robot. The girls were pushed toward things like “marketing” and operating the robot, presumably because it reflected well on the team to have female drivers/operators. I started to think back on our overlapping time on the team and… come to think of it, the girls really were pushed into a less-technical experience while I was pushed toward the greasy work.
This was 10 years ago, so I would hope things are a bit different now as more girls are influenced to get interested in STEM in high school. It’s a shame that she wasn’t encouraged to take on the same tasks as the boys, and it’s left an obvious effect on her into adulthood.
So I started thinking about my parents. My father would include me as much as possible in fixing up the house. My sister, on the other hand, would generally not be included. Granted, she’s 4 years younger than me, but I was forced to help my dad do things from a very young age when I would have rather been playing video games than learning life lessons. My wife is the eldest of three daughters, and they also were never really invited to help with home improvement outside of yard work.
We may not have grown up in the most forward-thinking town, but our parents are not sexists. Our robotics mentors may have lived a bit in the past, but they intended no malice while directing students towards work. In both of these cases, the adults were following the same pattern they’ve seen for generations. The trick now is to break that pattern.
For the second light fixture in the master bathroom, I held the flashlight. I instructed my wife in removing and replacing the fixture, only intervening on stubborn bolts. After we finished, she thanked me. I felt a bit guilty for not letting her take the wheel for so long. My assumption had always been she had the skills but no desire to get her hands dirty. Turns out she only needed the opportunity to break the pattern.
Last time we transformed our base synchronous D-Bus service to include asynchronous calls in a rather naive way. In this post, we’ll refactor those asynchronous calls to include D-Bus signals; codewise, we’ll pick up right where we left off after part 2: https://github.com/larryprice/python-dbus-blog-series/tree/part2. Of course, all of today’s code can be found in the same project with the part3 tag: https://github.com/larryprice/python-dbus-blog-series/tree/part3.
We can fire signals from within our D-Bus service to notify clients of tasks finishing, progress updates, or data availability. Clients subscribe to these signals and act accordingly. Let’s start by changing the signature of the `slow_result` method of `RandomData` to be a signal:

(code listing omitted)
We’ve replaced the method decorator with a `signal` decorator, and we’ve swapped out the guts of this method for a `pass`, meaning the method can be called but does nothing itself. We now need a way to fire this signal, which we can do from the `SlowThread` class we were using before. When creating a `SlowThread` in the `slow` method, we can pass in this signal as a callback. At the same time, we can remove the `threads` list we previously used to keep track of existing `SlowThread` objects.

(code listing omitted)
Now we can make some updates to `SlowThread`. The first thing to do is add a new parameter, `callback`, and store it on the object. Because `slow_result` no longer checks the `done` property, we can remove that property and the `finished` event. Instead of calling `set` on the event, we can now simply call the stored `callback` with the current `thread_id` and `result`. We end up with a couple of unused variables here, so I’ve also gone ahead and refactored the `work` method on `SlowThread` to be a little cleaner.

(code listing omitted)
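Stripped of the D-Bus plumbing, the callback-based `SlowThread` might look something like this (the bit-building body and the id scheme are assumptions):

```python
import random
import threading
import uuid


class SlowThread:
    """Worker that reports completion via a callback; in the real
    service, the slow_result signal would be passed in as `callback`."""

    def __init__(self, bits, callback):
        self._callback = callback
        self.thread_id = str(uuid.uuid4())  # identifier handed back to the caller
        self._thread = threading.Thread(target=self.work, args=(bits,))
        self._thread.start()

    def work(self, bits):
        # build the n-bit random string, then hand it straight to the callback
        result = ''.join(str(random.getrandbits(1)) for _ in range(bits))
        self._callback(self.thread_id, result)
```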
And that’s it for the service side. Any callers will need to subscribe to our `slow_result` signal, call our `slow` method, and wait for the result to come in.
We need to make some major changes to our `client` program in order to receive signals. We’ll need to introduce a main loop, which we’ll spin up in a separate thread, for communicating on the bus. The way I like to do this is with a ContextManager so we can guarantee that the loop will be exited when the program exits. We’ll move the logic we previously used in `client` to get the `RandomData` object into a private member method called `_setup_object`, which we’ll call on context entry after creating the loop. On context exit, we’ll simply call `quit` on the loop.

(code listing omitted)
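The shape of that context manager can be sketched without any D-Bus at all; here a background thread stands in for the GLib main loop, and the details are assumptions:

```python
import threading


class RandomDataClient:
    """Sketch: start a loop thread on entry, stop it on exit. The real
    version would build the main loop and call _setup_object on entry."""

    def __enter__(self):
        self._quit = threading.Event()
        # stand-in for loop.run(): block until asked to quit
        self._loop_thread = threading.Thread(target=self._quit.wait)
        self._loop_thread.start()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self._quit.set()            # analogous to calling quit() on the loop
        self._loop_thread.join()
        return False                # never swallow exceptions
```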
We can add methods on `RandomDataClient` to encapsulate `quick` and `slow`. `quick` is easy - we’ll just return `self._random_data.quick(bits)`. `slow`, on the other hand, will take a bit of effort. We’ll need to subscribe to the `slow_result` signal, giving a callback for when the signal is received. Since we want to wait for the result here, we’ll create a `threading.Event` object and `wait` for it to be `set`, which we’ll do in our handler. The handler, which we’ll call `_finished`, will validate that it has received the right result based on the current `thread_id` and then set the `result` on the `RandomDataClient` object. After all this, we’ll remove the signal listener from our bus connection and return the final result.

(code listing omitted)
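The wait-for-signal half of `slow` can be sketched in plain `threading` (the signal subscription is replaced here by calling the handler directly, and the class name is hypothetical):

```python
import threading


class SlowResultWaiter:
    """Block until a slow_result notification for our thread_id arrives."""

    def __init__(self, thread_id):
        self._thread_id = thread_id
        self._received = threading.Event()
        self.result = None

    def _finished(self, thread_id, result):
        # ignore results that belong to some other request
        if thread_id == self._thread_id:
            self.result = result
            self._received.set()

    def wait(self, timeout=None):
        self._received.wait(timeout)
        return self.result
```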
Now we’re ready to actually call these methods. We’ll wrap our old calling code with the `RandomDataClient` context manager and call the methods directly as we did before on the client:

(code listing omitted)
This should have feature-parity with our part 2 code, but now we don’t have to deal with an infinite loop waiting for the service to return.
We have a working asynchronous D-Bus service using signals. Next time I’d like to dive into forwarding command output from a D-Bus service to a client.
As a reminder, the end result of our code in this post is MIT Licensed and can be found on Github: https://github.com/larryprice/python-dbus-blog-series/tree/part3.
Last time we created a base for our asynchronous D-Bus service with a simple synchronous server/client. In this post, we’ll start from that base, which can be found on Github: https://github.com/larryprice/python-dbus-blog-series/tree/part1. Of course, all of today’s code can be found in the same project with the part2 tag: https://github.com/larryprice/python-dbus-blog-series/tree/part2.
Before we dive into making our service asynchronous, we need a reason to make it asynchronous. Currently, our only D-Bus object contains a single method, `quick`, which lives up to its namesake and finishes very quickly. Let’s add another method to `RandomData` which takes a while to finish its job:

(code listing omitted)
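The algorithm itself is plain Python; a sketch of the method body, with the one-second sleep shortened so the sketch stays runnable:

```python
import random
import time


def slow(bits=16):
    """Build an n-bit random number by concatenating 1s and 0s,
    sleeping between iterations (the original sleeps a full second)."""
    number = ''
    for _ in range(bits):
        number += str(random.getrandbits(1))
        time.sleep(0.01)  # original: time.sleep(1)
    return number
```

At one second per bit, even modest bit counts push a blocking call well toward the bus's method-call timeout, which is exactly the failure this post sets up.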
Note the addition of the `slow` method on the `RandomData` object. `slow` is a contrived implementation that builds an n-bit random number by concatenating 1s and 0s, sleeping for 1 second between each iteration. This will still go fairly quickly for a small number of bits, but could take quite some time for bit counts as low as 16.
In order to call the new method, we need to modify our `client` binary. Let’s add in the `argparse` module and take in a new argument: `--slow`. Of course, `--slow` will instruct the program to call `slow` instead of `quick`, which we’ll add to the bottom of the program:

(code listing omitted)
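The argument handling might look like this (the flag and the 16-bit default come from the post; the description string and argument names are guesses):

```python
import argparse


def parse_args(argv=None):
    # an optional bit count plus the new --slow switch
    parser = argparse.ArgumentParser(description='d-bus random data client')
    parser.add_argument('bits', type=int, nargs='?', default=16,
                        help='number of bits to request')
    parser.add_argument('--slow', action='store_true',
                        help='use the slow generator instead of quick')
    return parser.parse_args(argv)
```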
Now we can run our `client` a few times to see the result of running in slow mode. Make sure to start or restart the `service` binary before running these commands:

(terminal output omitted)
Your mileage may vary (it is a random number generator, after all), but you should eventually see a similar crash, caused by a timeout in the response of the D-Bus server. We know that this algorithm works; it just needs more time to run. Since a synchronous call won’t work here, we’ll have to switch over to more asynchronous methods…
At this point, we can go one of two ways. We can use the `threading` module to spin threads within our process, or we can use the `multiprocessing` module to create child processes. Child processes will be slightly pudgier, but will give us more functionality. Threads are a little simpler, so we’ll start there. We’ll create a class called `SlowThread`, which will do the work we used to do within the `slow` method. This class will spin up a thread that performs our work. When the work is finished, it will set a `threading.Event` that can be used to check that the work is completed. `threading.Event` is a cross-thread synchronization object; when the thread calls `set` on the `Event`, we know that the thread is ready for us to check the result. In our case, we call `is_set` on our event to tell a user whether or not our data is ready.

(code listing omitted)
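Minus the D-Bus glue, the Event-based `SlowThread` can be sketched like so (the sleep interval is shortened from the original one second):

```python
import random
import threading
import time


class SlowThread:
    """Do the slow work on a thread; set `finished` when the result is ready."""

    def __init__(self, bits=16):
        self.finished = threading.Event()
        self.result = None
        self._thread = threading.Thread(target=self.work, args=(bits,))
        self._thread.start()
        self.thread_id = str(self._thread.ident)

    @property
    def done(self):
        # is_set tells callers whether the data is ready yet
        return self.finished.is_set()

    def work(self, bits):
        number = ''
        for _ in range(bits):
            number += str(random.getrandbits(1))
            time.sleep(0.01)  # original: time.sleep(1)
        self.result = number
        self.finished.set()
```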
On the `RandomData` object itself, we’ll initialize a new thread-tracking list called `threads`. In `slow`, we’ll initialize a `SlowThread` object, append it to our `threads` list, and return the thread identifier from `SlowThread`. We’ll also want to add a method called `slow_result` to try to get the result from a given `SlowThread`; it will take in the thread identifier we returned earlier and try to find the appropriate thread. If the thread is finished (the `event` is set), we’ll remove the thread from our list and return the result to the caller.

(code listing omitted)
The last thing we need to do is update the client to use the new methods. We’ll call `slow` as we did before, but this time we’ll store the intermediate result as the thread identifier. Then we’ll use a while loop to spin forever until the result is ready:

(code listing omitted)
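The polling idea boils down to this loop (stand-in callables take the place of the real D-Bus calls):

```python
import time


def wait_for_result(is_done, get_result, interval=0.01):
    """Spin until the service says the data is ready, then fetch it.
    This is the naive approach; part 3 replaces it with signals."""
    while not is_done():
        time.sleep(interval)
    return get_result()
```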
Note that this is not the smartest way to do this; more on that in the next post. Let’s give it a try!
(terminal output omitted)
This polling method works as a naive approach, but we can do better. Next time we’ll look into using D-Bus signals to make our client more asynchronous and remove our current polling implementation.
As a reminder, the end result of our code in this post is MIT Licensed and can be found on Github: https://github.com/larryprice/python-dbus-blog-series/tree/part2.
All of this code is written with python3.5 on Ubuntu 17.04 (beta), is MIT licensed, and can be found on Github: https://github.com/larryprice/python-dbus-blog-series/tree/part1.
From Wikipedia:
In computing, D-Bus or DBus (for “Desktop Bus”), a software bus, is an inter-process communication (IPC) and remote procedure call (RPC) mechanism that allows communication between multiple computer programs (that is, processes) concurrently running on the same machine.
D-Bus allows different processes to communicate indirectly through a known interface. The bus can be system-wide or user-specific (session-based). A D-Bus service will post a list of available objects with available methods which D-Bus clients can consume. It’s at the heart of much Linux desktop software, allowing processes to communicate with one another without forcing direct dependencies.
Let’s start by building a base of a simple, synchronous service. We’re going to initialize a loop as a context to run our service within, claim a unique name for our service on the session bus, and then start the loop.
(code listing omitted)
Make this binary executable (`chmod +x service`) and run it. Your service should run indefinitely and do… nothing. Although we’ve already written a lot of code, we haven’t added any objects or methods which can be accessed on our service. Let’s fix that:

(code listing omitted)
We’ve defined a D-Bus object `RandomData` which can be accessed using the path `/com/larry_price/test/RandomData`; this style of string is the general form of an object path. We’ve defined an interface implemented by `RandomData` called `com.larry_price.test.RandomData` with a single method `quick`, declared with the `@dbus.service.method` decorator. `quick` will take in a single parameter, `bits`, which must be an integer as designated by the `in_signature` in our decorator. `quick` will return a string as specified by the `out_signature` parameter. All that `quick` does is return a random string given a number of bits. It’s simple and it’s fast.
Now that we have an object, we need to declare an instance of that object in our service to attach it properly. Let’s assume that `random_data.py` is in a directory `dbustest` with an empty `__init__.py`, and our service binary is still sitting in the root directory. Just before we start the loop in the `service` binary, we can add the following code:

(code listing omitted)
We don’t need to do anything with the object we’ve initialized; creating it is enough to attach it to our D-Bus service and prevent it from being garbage collected until the service exits. We pass in `bus_name` so that `RandomData` will connect to the right bus name.
Now that our service has an object with an available method, you’re probably interested in calling that method. You can do this on the command line with something like `dbus-send`, or you could find the service using a GUI tool such as `d-feet` and call the method directly. But eventually we’ll want to do this with a custom program, so let’s build a very small one to get started:
(code listing omitted)
A large chunk of this code is parsing an input argument as an integer. By default, `client` will request a 16-bit random number unless it gets a number as input from the command line. Next we spin up a reference to the session bus and attempt to find our `RandomData` object on the bus using our known service name and object path. Once that’s initialized, we can directly call the `quick` method over the bus with the specified number of bits and print the result.
Make this binary executable as well. If you try to run `client` without running `service`, you should see an error message explaining that the `com.larry-price.test` D-Bus service is not running (which would be true). Start `service`, then run `client` with a few different input options and observe the results:

(terminal output omitted)
That’s all there is to it. A simple, synchronous server and client. The server and client do not directly depend on each other but are able to communicate unidirectionally through simple method calls.
Next time, I’ll go into detail on how we can create an asynchronous service and client, and hopefully utilize signals to add a new direction to our communication.
Again, all the code can be found on Github: https://github.com/larryprice/python-dbus-blog-series/tree/part1.
Yes. Yes it would.
The first thing to install is snapd itself. You can find installation instructions for many Linux distros at snapcraft.io, but here’s the simple command if you’re on a debian-based operating system:
(command listing omitted)
Ubuntu users may be surprised to find that snapd is already installed on their systems. snapd is the daemon for handling all things snappy: installing, removing, handling interface connections, etc.
We use lxd as our container backend for libertine in the snap. lxd is essentially a layer on top of lxc to give a better user experience. Fortunately for us, lxd has a snap all ready to go. Unfortunately, the snap version of lxd is incompatible with the deb-based version, so you’ll need to completely remove that before continuing. Skip this step if you never installed lxd:
(command listing omitted)
For installing, in-depth instructions can be found in this blog post by one of the lxd devs. In short, we’re going to create a new group called `lxd`, add ourselves to it, and then map our own user ID and group ID to root within the container:

(command listing omitted)
We also need to initialize lxd manually. For me, the defaults all work great. The important pieces here are setting up a new network bridge and a new filestore for lxd to use. You can optionally use zfs if you have it installed (`zfsutils-linux` should do it on Ubuntu). Generally, I just hit “return” as fast as the questions show up and everything turns out alright. If anything goes wrong, you may need to manually delete zpools or network bridges, or reinstall the lxd snap. No warranties here.

(terminal output omitted)
You should now be able to run `lxd.lxc list` without errors. It may warn you about running `lxd init`, but don’t worry about that if your initialization succeeded.
Now we’re onto the easy part. `libertine` is only available from `edge` channels in the app store, but we’re fairly close to having a version that we could push into more stable channels. For the latest and greatest libertine:

(command listing omitted)
If we want libertine to work fully, we need to jump through a couple of hoops. For starters, D-Bus activation is not fully functional at this time for snaps. Lucky for us, we can fake it by either running the D-Bus service manually (`/snap/bin/libertined`) or by adding the following file at `/usr/share/dbus-1/services/com.canonical.libertine.Service.service`:

(file contents omitted)
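The activation file is a standard D-Bus service description; something along these lines (the `Exec` path assumes the snap-installed daemon):

```ini
[D-BUS Service]
Name=com.canonical.libertine.Service
Exec=/snap/bin/libertined
```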
Personally, I always create the file, which will allow libertined to start automatically on the session bus whenever a user calls it. Hopefully d-bus activation will be fixed sooner rather than later, but this works fine for now.
Another issue is that existing deb-based libertine binaries may conflict with the snap binaries. We can fix this by adjusting `PATH` in our `.bashrc` file:

(code listing omitted)
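A one-line sketch of that `.bashrc` adjustment:

```shell
# put snap binaries ahead of the deb-installed ones in lookup order
export PATH="/snap/bin:$PATH"
```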
This will give higher priority to snap binaries (which should be the default, IMO). One more thing to fix before running full-force: add an environment variable to `/etc/environment` so that the correct libertine binary is picked up in Unity 8:

(code listing omitted)
OK! Now we’re finally ready to start creating containers and installing packages:
(command listing omitted)
If you want to launch your apps in Unity 7 (why not?):
(command listing omitted)
When running Unity 8, your apps should show up in the app drawer with all the other applications. This will all depend on libertined running, so make sure that it runs at startup!
I’ve been making a lot of improvements to the snap lately, especially as the ecosystem continues to mature. One day we plan for a much smoother experience, but this current setup will let us work out some of the kinks and find issues. If you want to switch back to the deb-based libertine, you can just install it through `apt` and remove the change to `/etc/environment`.
These are the goals I set for myself at the start of the year and how I met or missed them:
Major changes in my life, career, and the world at large have made 2016 a memorable year for me. I highly encourage you to reflect on the year you’ve had and think about what you can do to make 2017 great. Happy new year!
In my last post, I demonstrated creating a snap package for an application available in the archive. I left that application unconfined, which is taboo in the long run if we want our system to be secure. In a few steps, we can add the necessary components to confine our `pingus` snap.
For reference, this is the original `snapcraft.yaml` file for creating a `pingus` snap, except that we’ve updated the `confinement` property to `strict`:

(code listing omitted)
If you’re feeling bold, you can build and install the snap from here, but be warned that this led me into an ncurses nightmare that I had to forcibly kill. That’s largely because `pingus` depends on X11, which is not available out-of-the-box once we’ve confined our snap. If we want to use X11, we’re going to need to connect to it using the snap-land concept of interfaces. Interfaces allow us to access shared resources and connections provided by the system or other snaps. There’s some terminology to grapple with here, but the bottom line is that a “slot” provides an interface which a “plug” connects to. You can see a big list of available interfaces with descriptions on the wiki. Our `pingus` app will “plug” into the X11 interface’s “slot”:

(code listing omitted)
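In `snapcraft.yaml`, that amounts to listing the interface under `plugs` on the app entry; roughly (only the relevant `apps` stanza shown, details assumed):

```yaml
apps:
  pingus:
    command: pingus
    plugs:
      - x11
```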
You can build and install the new snap with the `--dangerous` flag for your local confined snap. After that, you can verify the interface connection with the `snap interfaces` command:

(terminal output omitted)
Now, when we run `pingus`… it works! Well, video works. If we want sound, we’ll also need the `pulseaudio` interface:

(code listing omitted)
Once again: build, install, and run… et voilà! Is it just me, or was that surprisingly painless? Of course, not all applications live such isolated lives. Note that the x11 interface is a transitional interface, meaning that we would rather our app fully transition to Mir or some alternative. To go a step further with this snap, we could create a `snapcraft.yaml` that builds from source to get the absolute latest version of our app. At this point, we can change our `grade` property to `stable` and feel good about something that we could push to the store for review.
Any code you see here is free software. Find the project here: https://github.com/larryprice/pingus-snap
Let’s try to create a snap for the game pingus. `pingus` is a great little Lemmings clone that we can easily convert to a snap. We’ll start by installing the necessary dependencies for snap building (see the snapcraft website for more):

(command listing omitted)
Now we can initialize a project directory with snapcraft:
(command listing omitted)
`snapcraft init` creates the following sample file to give us an idea of what we’ll need to provide:

(code listing omitted)
Most of these values for our `pingus` snap should be obvious. The interesting markup here is in `parts`, which is where we’ll describe how to build our snap. We’ll start by taking advantage of the `nil` plugin to simply unpack the `pingus` deb from the archive. We define our list of debs to install in a list called `stage-packages`. We’ll also define another section, `apps`, to tell `snapcraft` what binaries we want to be able to execute. In our case, this will just be the `pingus` command. Here’s what my first draft looks like:

(code listing omitted)
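A draft along those lines might look like the following; the version, summary, and description values here are placeholders rather than the original file’s:

```yaml
name: pingus
version: '0.7.6'
summary: A free Lemmings-like puzzle game
description: |
  Guide groups of penguins safely through dangerous levels.
grade: devel
confinement: devmode

parts:
  pingus:
    plugin: nil
    stage-packages:
      - pingus

apps:
  pingus:
    command: pingus
```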
Nice, right? Building and installing our snap is easy:
(command listing omitted)
We used devmode
here because our app will be running unconfined (a topic for another blog post). Now, for the moment of truth! The snap tools automatically put our new app in PATH
, so we can just run pingus
:
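The run goes something like this (the exact error text depends on the pingus version):

```shell
$ pingus
# fails: the launcher tries to execute helpers via the hardcoded
# path /usr/lib/games/pingus, which doesn't exist inside the snap
```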
¡Ay, caramba! We’ve run into a fairly common issue while snapping legacy software: hardcoded paths. Fortunately, the corresponding pingus
executable is very simple. It’s trying to execute a command living in /usr/lib/games/pingus
, which is not in our snap’s PATH
. The easiest way to fix this is to fix the pingus
executable. Since we don’t want to spend time modifying the upstream to use a relative path, we can create our own version of the pingus
wrapper locally and copy it into our snap. The only change to this new wrapper will be prepending the snap’s install path $SNAP
to the absolute paths:
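The wrapper is a two-line shell script; the exact binary name under /usr/lib/games/pingus may differ between pingus versions:

```shell
#!/bin/sh
exec "$SNAP/usr/lib/games/pingus/pingus.bin" "$@"
```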
Now we can update our yaml file with a new part called env
which will use the dump
plugin to copy our wrapper file into the snap. We’ll also update our command to call the wrapper:
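A sketch of the updated file — the wrapper's name and the local env/ directory holding it are my choices here:

```yaml
apps:
  pingus:
    command: pingus-wrapper   # our local wrapper script

parts:
  pingus:
    plugin: nil
    stage-packages:
      - pingus
  env:
    plugin: dump
    source: env               # local directory holding pingus-wrapper
```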
When you run snapcraft
this time, the env
part will be built. After performing another install, you can run pingus
, and you should be greeted with one of the best Lemmings clones available! Because we’re running unconfined in devmode, this all just works without any issues. I intend to write another blog post in the near future with the details on confining pingus
, so look out for that soon. I may also go into detail on building more complex cases, such as building snaps from source and building custom plugins, or reviewing a case study such as the libertine
snap.
For much, much more on snaps, be sure to visit snapcraft.io. If you’re looking for a published version of pingus as a snap, you can try sudo snap install --devmode --beta pingus-game
, and you can run the game with pingus-game.pingus
.
Source code available at https://github.com/larryprice/pingus-snap.
There is a tool called pbuilder
that exists to help out in these situations. pbuilder
uses a chroot to set up a clean environment to build packages, and can even be used to build packages for systems with architectures different from your own.
Note: All code samples were originally written from a machine running Ubuntu 16.10 64-bit. Your mileage may vary.
Given a typical debian-packaged project with a debian
directory (control
, rules
, .install
), you can use debuild
to build a package from your local environment:
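A typical run, from the project root (the project name is hypothetical; -us -uc skips package signing):

```shell
cd my-project     # contains the debian/ directory
debuild -us -uc   # build the package and run lintian, skipping signing
```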
This works pretty well for sanity checks, but sometimes knowing you’re sane just isn’t quite enough. My development environment is filled with libraries and files installed in all kinds of weird ways and in all kinds of strange places, so there’s a good chance packages built successfully on my machine may not work on everyone’s machine. To solve this, I can install pbuilder
and set up my first chroot:
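Installation plus a one-time base chroot build for the current release:

```shell
sudo apt install pbuilder
sudo pbuilder create   # builds the base chroot tarball; takes a while
```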
Since I use debuild
pretty frequently, I also rely on pdebuild
which performs debuild
inside of the clean chroot environment, temporarily installing the needed dependencies listed in the control
file.
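Usage mirrors debuild:

```shell
cd my-project
pdebuild   # debuild, but inside the clean pbuilder chroot
# results land in /var/cache/pbuilder/result/ by default
```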
Alternatively, I could create the .dsc
file and then use pbuilder
to create the package from there:
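Again with a hypothetical project name and version:

```shell
cd my-project
debuild -S -us -uc                          # generate the .dsc source package
sudo pbuilder build ../my-project_0.1.dsc   # build the .deb in the chroot
```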
Let’s say that you need to build for an older distribution of Ubuntu on a weird architecture. For this example, let’s say vivid
with armhf
. We can use pbuilder-dist
to verify and build our packages for other distros and architectures:
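pbuilder-dist takes the distribution and architecture ahead of the operation (project name hypothetical):

```shell
pbuilder-dist vivid armhf create             # one-time chroot setup
pbuilder-dist vivid armhf build my-project_0.1.dsc
```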
In some cases, you may need to enable other archives or install custom software in your chroot. In the case of our vivid-armhf chroot, let’s add the stable-overlay PPA, which updates the outdated vivid with more modern versions of some packages.
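One way to do that is to log in to the chroot with changes saved on exit; the PPA location here is from memory, so double-check it:

```shell
pbuilder-dist vivid armhf login --save-after-login
# ...now inside the chroot:
apt-get install software-properties-common
add-apt-repository ppa:ci-train-ppa-service/stable-phone-overlay
apt-get update
exit   # --save-after-login persists these changes to the base tarball
```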
pbuilder
and chroots are powerful tools in the world of packaging and beyond. There are scripting utilities, as well as pre- and post-build hooks which can customize your builds. There are ways to speed up clean builds using local caches or other “cheats”. You could use the throwaway terminal abilities to create and destroy tiny worlds as you please. All of this is very similar to the utility which comes from using docker and lxc, though the underlying “container” is quite a bit different. Using pbuilder
seems to have a much lower threshold for setup, so I prefer it over docker for clean build environments, but I believe docker/lxc to be the better tool for managing the creation of consistent virtual environments.
Further reading:
Pbuilder HowTo on the Ubuntu wiki Pbuilder tricks from the debian wiki
In many C-like languages, you can fix most of your dependency problems with The Big Three: mocks, fakes, and stubs. A fake is an actual implementation of an interface used for non-production environments, a stub is an implementation of an interface returning a pre-conceived result, and a mock is a wrapper around an interface allowing a programmer to accurately map what actions were performed on the object. In C-like languages, you use dependency injection to give your classes fakes, mocks, or stubs instead of real objects during testing.
The good news is that we can also use dependency injection in python! However, I found that relying solely on dependency injection would pile on more dependencies than I wanted and was not going to work to cover all my system calls. But python is a dynamic language. In python, you can literally change the definition of a class inside of another class. We call this operation patch and you can use it extensively in testing to do some pretty cool stuff.
Let’s define some code to test. For all of these examples, I’ll be using python3.5.2 with the unittest and unittest.mock libs on Ubuntu 16.10. You can find the final versions of these code samples on github.
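The listing is roughly the following sketch of corp/work.py — the randint bounds and the exception message are guesses on my part, but the behavior matches the description below:

```python
from random import randint


class WorkerStrikeException(Exception):
    pass


class Worker(object):
    def __init__(self):
        self.hours_worked = 0

    def work(self):
        # Work a random number of hours (the bounds are illustrative).
        hours = randint(1, 20)
        self.hours_worked += hours
        if self.hours_worked > 40:
            # Past 40 hours this week: this worker is now picketing.
            raise WorkerStrikeException("This corporation is unfair!")
        return hours


class Boss(object):
    def __init__(self, worker):
        self._worker = worker
        self.profit = 0

    def make_profit(self):
        # Profit is the number of hours worked multiplied by 1000.
        try:
            self.profit += self._worker.work() * 1000
        except WorkerStrikeException:
            # The worker went on strike; hire a replacement. So it goes.
            self._worker = Worker()
```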
These are two simple classes (and a custom Exception
) that we’ll use to demonstrate unit testing in python. The first class, Worker
, will work a maximum of 40 hours per week before picketing its corporation. Each time work
is called, the Worker
will work a random number of hours. The Boss
class takes in a Worker
object, which it uses as it performs make_profit
. The profit is determined by the number of hours worked multiplied by 1000. When the worker starts picketing, the Boss
will hire a new Worker
to take their place. So it goes.
Our goal is to fully test the Boss
class. We’ve left ourselves a dependency to inject in the __init__
method, so we could start there. We’ll mock the Worker
and pass it into the Boss
initializer. We’ll then set up the Worker.work
method to always return a known number so we can test the functionality of make_profit
.
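Here’s a self-contained sketch of that first test. The post keeps Worker and Boss in corp/work.py; minimal stand-ins are inlined here so the example runs on its own:

```python
import unittest
from unittest.mock import call, create_autospec


class Worker(object):
    """Stand-in for corp.work.Worker; always mocked in this test."""
    def work(self):
        pass


class Boss(object):
    """Stand-in for corp.work.Boss."""
    def __init__(self, worker):
        self._worker = worker
        self.profit = 0

    def make_profit(self):
        self.profit += self._worker.work() * 1000


class TestBoss(unittest.TestCase):
    def test_make_profit(self):
        # create_autospec constrains the mock to Worker's real attributes;
        # anything undefined on Worker raises instead of silently passing.
        worker = create_autospec(Worker)
        worker.work.return_value = 8  # a known number of hours

        boss = Boss(worker)
        for _ in range(3):
            boss.make_profit()

        self.assertEqual(boss.profit, 24000)  # 3 * 8 * 1000
        worker.work.assert_has_calls([call(), call(), call()])
```

In the post’s layout you’d import the module and spec against work.Worker instead of the inlined class.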
To run this test, use the command python3 -m testtools.run test
, where test
is the name of your test file without the .py
.
One curiosity here is unittest.mock.create_autospec
. Python will also let you directly create a Mock
, which will absorb all attribute calls regardless of whether they are defined, and MagicMock
, which is like Mock
except it also mocks magic methods. create_autospec
will create a mock with all of the defined attributes of the given class (in our case work.Worker
), and raise an Exception when the attribute is not defined on the specced class. This is really handy, and eliminates the possibility of tests “accidentally passing” because they are calling default attributes defined by the generic Mock
or MagicMock
initializers.
We set the return value of the work
function with return_value
, and we can change it on a whim if we so desire. We then use assertEqual
to verify the numbers are crunching as expected. One further thing I’ve shown here is assert_has_calls
, a mock assertion to verify that work
was called 3 times on our mock method.
You may also note that we subclassed TestCase
to enable running this class as part of our unit testing framework with the special __main__
method definition at the bottom of the file.
Although our first test demonstrates how to make_profit
with a happy worker, we also need to verify how the Boss
handles workers on strike. Unfortunately, the Boss
class creates his own Worker
internally after learning they can’t trust the Worker
we gave them in the initializer. We want to create consistent tests, so we can’t rely on the random numbers generated by randint
in Worker.work
. This means we can’t just depend on dependency injection to make these tests pass!
At this point we have two options: we can patch the Worker
class or we can patch the randint
function. Why not both! As luck would have it, there are a few ways to use patch
, and we can explore a couple of these ways in our two example tests.
We’ll patch the randint
function using a method decorator. Our intent is to make randint
return a static number every time, and then verify that profits keep booming even as we push workers past their limit.
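A self-contained sketch of that test — the stand-in classes mirror corp/work.py, and since everything lives in one file here, the patch target is this module’s randint rather than corp.work.randint:

```python
import unittest
from random import randint
from unittest.mock import patch


class WorkerStrikeException(Exception):
    pass


class Worker(object):
    def __init__(self):
        self.hours_worked = 0

    def work(self):
        hours = randint(1, 20)
        self.hours_worked += hours
        if self.hours_worked > 40:
            raise WorkerStrikeException("strike!")
        return hours


class Boss(object):
    def __init__(self, worker):
        self._worker = worker
        self.profit = 0

    def make_profit(self):
        try:
            self.profit += self._worker.work() * 1000
        except WorkerStrikeException:
            self._worker = Worker()  # hire a replacement


class TestBoss(unittest.TestCase):
    # 'corp.work.randint' in the post's layout.
    @patch(__name__ + ".randint")
    def test_profit_through_strikes(self, mock_randint):
        mock_randint.return_value = 20  # a fine number of hours, per the Boss

        boss = Boss(Worker())
        for _ in range(6):
            boss.make_profit()

        # Two productive rounds, a strike, then a fresh worker repeats the
        # cycle: four rounds of 20 * 1000 profit, randint called every time.
        self.assertEqual(boss.profit, 80000)
        self.assertEqual(mock_randint.call_count, 6)
```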
When calling patch
, you must describe the namespace relative to the module you’re importing. In our case, we’re using randint
in the corp.work
module, so we use corp.work.randint
. We define the return_value
of randint
to simply be 20. A fine number of hours per day to work an employee, according to the Boss
. patch
will inject a parameter into the test representing an automatically created mock that will be used in the patch, and we use that to assert that our calls were all made the way we expected.
Since we know the inner workings of the Worker
class, we know that this test exercised our code by surpassing a 40-hour work week for our poor Worker
and causing the WorkerStrikeException
to be raised. In doing so, we’re depending on the Worker
/Boss
implementation to stay in-sync, which is a dangerous assumption. Let’s explore patching the Worker
class instead.
To spice things up, we’ll use the ContextManager
syntax when we patch the Worker
class. We’ll create one mock Worker
outside of the context to use for dependency injection, and we’ll use this mock to raise
the WorkerStrikeException
as a side effect of work
being called too many times. Then we’ll patch the Worker
class for newly created instances to return a known timesheet.
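A sketch of that second test, again with the corp/work.py stand-ins inlined (so the patch target is this module’s Worker rather than corp.work.Worker):

```python
import unittest
from unittest.mock import create_autospec, patch


class WorkerStrikeException(Exception):
    pass


class Worker(object):
    """Stand-in for corp.work.Worker; mocked or patched in this test."""
    def work(self):
        pass


class Boss(object):
    def __init__(self, worker):
        self._worker = worker
        self.profit = 0

    def make_profit(self):
        try:
            self.profit += self._worker.work() * 1000
        except WorkerStrikeException:
            self._worker = Worker()  # hire a replacement


class TestBossStrike(unittest.TestCase):
    def test_strike_triggers_replacement(self):
        # The injected worker works twice, then goes on strike.
        striker = create_autospec(Worker)
        striker.work.side_effect = [8, 8, WorkerStrikeException("strike!")]

        # Patch the Worker class so the Boss's replacement hire is a
        # mock with a known timesheet.
        with patch(__name__ + ".Worker") as MockWorker:
            scrub = MockWorker.return_value
            scrub.work.return_value = 4

            boss = Boss(striker)
            for _ in range(4):
                boss.make_profit()

        # 8 + 8 hours from the striker, nothing during the strike round,
        # then 4 hours from the scrub: 20 * 1000 in total.
        self.assertEqual(boss.profit, 20000)
        self.assertTrue(scrub.work.called)
```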
After the first Worker
throws a WorkerStrikeException
, the second Worker
(scrub) comes in to replace them. In patching the Worker
, we are able to more accurately describe the behavior of Boss
regardless of the implementation details behind Worker
.
I’m not saying this is the best way to go about unit testing in python, but it is an option that should help you get started unit testing legacy code. There are certainly those who see this level of micromanaging mocks and objects as tedious, but there is benefit to defining the way a class acts under exact circumstances. This was a contrived example, and your code may be a little bit harder to wrap with tests.
Now you can go get Hooked on Pythonics!
There have been a few other good posts about X applications on Unity 8 including this one on dogfooding, this one on Ubuntu Touch, and this one on how it works under the covers. This blog post is explicitly about Unity 8 on desktop using the Libertine CLI, though it can be applied to most devices running Ubuntu Touch.
Disclaimer: I work for Canonical on one of the teams making all of this fancy stuff work.
The toolchain we’ll be relying on is called libertine
, and it’s essentially a wrapper around unprivileged LXC and chroot-based containers. We prefer to use LXC containers on newer OSes, but we must continue supporting chroot containers on many devices due to kernel limitations.
For desktop Unity 8, you’ll need the packages for libertine
, libertine-tools
, and lxc
to get started. This will install a CLI and GUI for maintaining Libertine containers and applications.
If you’re running Wily or newer, you can just run the following in your terminal:
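The package names are the three mentioned above:

```shell
sudo apt install libertine libertine-tools lxc
```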
Otherwise, you’ll need to add the stable overlay PPA first:
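Roughly as follows — the PPA location here is from memory, so double-check it before relying on it:

```shell
sudo add-apt-repository ppa:ci-train-ppa-service/stable-phone-overlay
sudo apt update
sudo apt install libertine libertine-tools lxc
```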
At this point, if you’re on desktop you can open up the GUI which will guide you through creating a new container and installing applications. Search the Dash (or Apps scope) for libertine
and, given that we haven’t pushed a buggy version recently, you’ll be presented with a Qt application for maintaining containers. I highly recommend using the GUI, because then you are guaranteed not to be following out-of-date console commands.
…But maybe you prefer the terminal. Or maybe you’re secretly SSH’d into the target machine or Ubuntu Touch device and need to use the terminal. If so…
The CLI we’ll be using is libertine-container-manager
. It has a manpage
, a --help
option, and autocomplete to help you out in a jam.
The first thing you’ll want to do is create a container. There are a lot of options, but to create an optimal container for your current machine you only need to specify the id
and name
parameters:
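Using the container id and name that appear throughout the rest of this post:

```shell
libertine-container-manager create -i desktopapps -n "Desktop Apps"
```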
A couple of things to note here: Your id
must be unique and conform to the simple click name regex - this is what will identify your container on a system level. The name
should be human-readable so you can easily identify what might be inside your container. If you don’t specify a name
, your id
will be used. The CLI will likely ask you for a password to use in the container in case you ever need it. You can leave this blank if you’re not concerned with that kind of thing.
At this point, a bunch of things should be happening in your terminal. This will pull a container image for your current distro and install all the requirements to get started maintaining and running X apps. This could take anywhere from a few minutes to the next hour depending on your network and disk speeds. Once you’re done, you can use the list
subcommand to list all installed containers (note you probably just have one at this point). If you ever want to delete your container, you can run libertine-container-manager destroy -i desktopapps
.
Once that’s finished, we can start installing apps. To find apps available, you can use the search-cache
subcommand:
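Along these lines — consult the manpage for the exact spelling of the search flag on your version:

```shell
libertine-container-manager search-cache -i desktopapps -s office
```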
This will return a few strings from the apt-cache of the container with id
“desktopapps” that match “office”. Now, if you want to install “libreoffice”:
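Something like this (the package flag spelling is per the manpage on your version):

```shell
libertine-container-manager install-package -i desktopapps -p libreoffice
```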
This will install the full libreoffice suite. Nice! Similarly, you can use the remove-package
subcommand to remove applications. Don’t remember what apps you’ve installed? Use the list-apps
command:
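For example:

```shell
libertine-container-manager list-apps -i desktopapps
```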
Maybe you’re an avid Steam for Linux gamer and want to try to get some games working. Since Steam still only comes in a 32-bit binary, you’ll need to enable the multiarch repos, and then you can just install Steam like any other app:
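Along these lines — check the manpage for the exact multiarch subcommand on your version:

```shell
# enable i386 alongside amd64 inside the container, then install steam
libertine-container-manager configure -i desktopapps --multiarch enable
libertine-container-manager install-package -i desktopapps -p steam
```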
Steam will ask you to agree to their user agreement from the command line, which you should be able to do easily. If you need to use the readline
frontend for dpkg, you can append --readline
to the install-package
command to enable it.
There are many other commands to explore to maintain your container, but for now I’ll let you check the manpage or open the GUI to explore further.
Now that you’ve installed some apps, you probably want to run them. You can install the Libertine Scope, which will allow you to peruse your installed apps in a Unity 8 session. You can either install it from the App Store on a device (search for “Desktop Apps Scope”) or through apt
on desktop with:
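The scope package name may vary, but it should be something like:

```shell
sudo apt install libertine-scope
```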
In a Unity 8 session, you can now find the scope and click on apps to run them. Note that there are many apps which still don’t work, such as those requiring a terminal or sudo
; consider these a work in progress.
I’ve been toiling away the past few weeks getting a scope ready which can be used explicitly to install/remove X apps in Unity 8, like the current Ubuntu Software Center (or app store on Touch devices). This scope should be available anywhere the libertine scope is available, meaning that it will alleviate a lot of the pain associated with installing/removing apps for a large chunk of users. Using the Libertine GUI or Libertine CLI will still allow for much more customization, but those tools are largely designed with power users in mind.
Are you able to get libertine working on your system? Can you launch X applications to your heart’s content? Let me know in the comments!