
I went to ConFoo Montreal 2019

ConFoo is “a multi-technology conference for developers” that takes place every year in Montreal. I attended the 2019 edition, which took place on March 13-15. There, I met developers from various backgrounds and countries. I also discussed technologies, practices, business-driven development, legacy code maintenance and ethics in IT. But most of all, I got inspired by many speakers and even gave a talk myself!

I really liked the quality of the organization. Scheduling 155 talks across 3 days, along with events, is not an easy task. Everything went smoothly thanks to Yann Larrivée and all the volunteers! The venue was really nice, the speaker setup was good, and the food was awesome!

A picture of ConFoo volunteers

The only downside for me was that talks were not recorded. That’s too bad when you have to choose between two promising talks scheduled at the same time in two different tracks. Also, as a speaker, a recording of my talk would have been really nice to share.

But don’t you worry, I’ve got your back! Here’s my recap of the talks I attended, so you can get a glimpse of what ConFoo is about.

“Asking Better Questions” by Garth Henson

There are many types of questions: rhetorical, loaded, negative… The “questions” we’re talking about are meant to gather information so we can make decisions. Thus, the questions we ask should be geared towards communication.

Questions are also bound to a context. A good question gets contextual and relevant information from someone else. Ideally, the question should guide the conversation towards usable answers. Who is the information for? Ourselves, our management, the engineering team or the end user? That means a very good question requires preparation.

For example, the questions in a technical interview are really specific. What is the goal of our questioning? It depends:

  • What is the level of the role?
  • What will be the candidate’s primary responsibilities?
  • With whom will the candidate be working?
  • What level of communication will be required for the future employee?

If we’re looking for a Senior Developer, it’s a good idea to check for communication skills too.
From the candidate’s perspective, a great question to ask during such an interview would be: “When, in the role I’m applying for, will I have to implement such an algorithm?”. It’s not just about knowledge, but about comprehension of the role.

A final example I really liked was about a Tech Lead taking part in a product meeting to gather specifications. If he asks “What do you need this product to do?”, he will likely get a long, detailed list of required features. What if he asks instead: “What are you trying to accomplish with this product? What problem are you trying to solve?”. This opens the discussion, allowing everyone to find a potentially simpler solution to the existing problem.

“API evolution the right way” by A. Jesse Jiryu Davis

Jesse wrote a detailed blog post on the content of his talk, which I’d encourage you to read.

The bottom line is: as a project maintainer, you’re like a creator deity. You want your creature to evolve, adapt, grow better. But a good evolution should be a responsible evolution: it should be beneficial to users.

Jesse came up with several covenants you should follow to mindfully evolve your API:

  1. Evolve cautiously
    1. Avoid Bad Features
    2. Minimize Features
    3. Keep Features Narrow
    4. Mark Experimental Features “Provisional”
    5. Delete Features Gently
  2. Record History Rigorously
    1. Maintain a Changelog
    2. Choose a Version Scheme
    3. Write an Upgrade Guide
  3. Change Slowly and Loudly
    1. Add Parameters Compatibly (see the sketch below)
    2. Change Behavior Gradually

It is particularly helpful when you maintain a project with numerous consumers, whether it’s open-source or not.
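To make the “Add Parameters Compatibly” covenant concrete, here’s a minimal TypeScript sketch (the names `fetchDepartures` and `includeSoldOut` are made up for illustration): you add the new parameter with a default that preserves the old behavior, so every existing call site keeps working.

```typescript
type Departure = { city: string; soldOut: boolean };

const departures: Departure[] = [
  { city: "Montreal", soldOut: false },
  { city: "Montreal", soldOut: true },
];

// v1.0 signature was fetchDepartures(city).
// v1.1 adds an options object with a default, so old call sites keep working.
function fetchDepartures(
  city: string,
  options: { includeSoldOut?: boolean } = {}
): Departure[] {
  const { includeSoldOut = false } = options; // default preserves v1.0 behavior
  return departures.filter(
    (d) => d.city === city && (includeSoldOut || !d.soldOut)
  );
}

fetchDepartures("Montreal"); // v1.0 call, still valid
fetchDepartures("Montreal", { includeSoldOut: true }); // v1.1 opt-in
```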

“How do you structure your apps” by Kat Zien

Kat’s talk was certainly the most anticipated and popular of all ConFoo 2019.

Should we put everything under the same namespace? Should we go micro-services or monolith? How much should be shared across components? How do we split our components? At some point, all engineers raise these kinds of questions. In most languages, there is no “official way” to do it. Also, requirements change, and we should embrace that.

Fortunately, there are invariant goals that hold true for a good architecture. It should be:

  • Consistent
  • Easy to understand
  • Simple to change
  • Easy to test
  • Transparent on how it works

Good structure should work for you, not against you. But again, there is no recipe to build a good architecture; it comes with practice.

Still, there are a few concrete things you can apply today:

  1. It’s fine to start with a flat structure.
  2. Don’t code for the future. Keep it simple.
  3. When it grows, group things by context. Evolve towards Hexagonal Architecture and have a look at Domain-Driven Design.
  4. Separate the business part from the infrastructure part. Again, evolve towards Hexagonal Architecture.
  5. Keep project configuration files at the root level.
  6. “Be like water” said Bruce Lee. Requirements will change. Don’t expect to get it right the first time. Evolve, adapt, this is fine.
  7. Be consistent across the project.
  8. Prototype and experiment! Good choices come with experience.

“The secrets of Hexagonal Architecture” by Nicolas Carlo—👋 hey, here I am!

The last talk of the day I attended was mine—it would have been hard to do otherwise!

Nicolas Carlo giving his talk in front of the audience

I presented the details of the Hexagonal Architecture Kat was talking about.

If there is one thing you should retain from my talk, it’s that you should separate the Domain from the Infrastructure if you want to write maintainable software.

The Domain is the business part. At Busbud, we sell intercity bus tickets, worldwide. We talk about things like Seat, Departure, Leg, Stop, Roundtrip… These are specific to our business, and we understand this vocabulary in a given context. We also have Taxes, Fees, Discount Code, which you probably have too. This is a more generic Domain for us, but it’s still business.

The Infrastructure part is how our current solution is made. Fastly, Postgres databases, Redis instances, Express server, React… All of these are technologies we use to provide busbud.com to travellers across the planet. But we could change the infrastructure and still do the same business.

The Hexagonal Architecture is the simplest way to do this separation. Put your Domain at the heart of your software. Make your Infra depend on your Domain, and keep your Domain from depending on your Infra. Use business vocabulary inside the Domain to define intention-revealing interfaces (aka Ports). Build concrete implementations of these interfaces in your Infra (aka Adapters). That gives you the flexibility to plug any adapter into your Domain.
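Here’s a minimal TypeScript sketch of the idea (the names `Trip`, `TripRepository` and `InMemoryTripRepository` are made up for illustration, not Busbud’s actual code):

```typescript
// Domain: business vocabulary, no infrastructure dependency.
interface Trip {
  id: string;
  departure: string;
  arrival: string;
}

// Port: an intention-revealing interface, defined by the Domain.
interface TripRepository {
  findByDeparture(city: string): Promise<Trip[]>;
}

// Domain logic depends on the Port only, never on a concrete technology.
async function listTripsFrom(city: string, trips: TripRepository): Promise<Trip[]> {
  return trips.findByDeparture(city);
}

// Infrastructure: a concrete Adapter. It could be Postgres, Redis, an HTTP API…
class InMemoryTripRepository implements TripRepository {
  constructor(private trips: Trip[]) {}

  async findByDeparture(city: string): Promise<Trip[]> {
    return this.trips.filter((t) => t.departure === city);
  }
}

// Plug any adapter into the Domain. Swapping it doesn’t touch the business code.
const repo = new InMemoryTripRepository([
  { id: "1", departure: "Montreal", arrival: "Quebec City" },
]);
listTripsFrom("Montreal", repo).then(console.log);
```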

This separation makes testing easy. It allows you to start with something simple and evolve the infrastructure when actually needed. Finally, it’s a good first step towards Clean Architecture.

Here are the slides: https://www.slideshare.net/nicolascarlo1/the-secrets-of-hexagonal-architecture

“FSGD and the Art of Delivering Value” by David Neal

David’s keynote was really inspirational.

David Neal giving his keynote

Before talking about delivering value, he wanted to remind us of this: respect for people is the most important thing. It requires you to listen and have empathy for the people you work with. Company culture is crucial, and you’re part of it. If the system rewards bad behavior, this bad behavior will keep happening. Remember what W. Edwards Deming said: “A bad system will beat a good person every time”.

Then, David presented FSGD—pronounced “Fizz Good”. It’s a thinking tool to help you make better decisions.

It stands for:

  • Frequent
  • Small
  • Good
  • Decoupled

Frequent

It’s not just about speed, but about consistency. Consistency is key to being reliable.

Releasing frequently allows you to get feedback, so you can decide to change priorities and focus on what is important. Adapting to this feedback is key for greatness. It allows you to quickly ship the 80% that matters to the customer, which is generally better than working 3 months to get to 100%. That makes your customers trust you.

They become forgiving when you make mistakes, because they trust you’ll fix them frequently and finally get it right. Think about Microsoft: they went from a 3-year release cycle to weekly updates.

Small

To release frequently, you have to release small.

Some things are notoriously big. But even in such cases, think about the problem. How could you manage to ship more frequently? Dig into the root cause of what makes things big. Challenge it.

When things start to get bigger than expected, stop and think: what is happening? Can you ship something smaller?

Good

Good is subjective.

There’s a saying: “if you’re not embarrassed by the first version of your software, you launched too late”. Think about the first iPhone: it didn’t have MMS, GPS, 3G, an SDK, an App Store or copy-paste. Still, it was “good enough”. Define what “good enough” means for your software.

A definition of “good enough” David uses relies on TLDR:

  • Tested. You need to feel confident it’s automatically tested. Even for MVPs.
  • Logged. You need to measure the usage of a feature to make informed decisions.
  • Documented. You need to know how to set it up / build / test / use.
  • Reviewed. You need to share the knowledge and have someone else have a look at it.

Decoupled

Decoupled is the contrary of coordinated: integrating with the system should not involve a lot of friction.

If you want to move fast, you have to be autonomous. You don’t want to stress every time you release because you might badly impact another team.

The release of Windows 95 was a very coordinated change. It was a big-bang event. It took months, at least. Teams should be able to release independently from each other.

Feature flags can help there. You can ship a new feature dark: the code is merged in the main branch, but the feature is not released yet. That decouples shipping code from releasing features.
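A minimal sketch of what shipping dark looks like, assuming a trivial in-memory flag store (real setups would use a flag service; the flag name `new-checkout` is made up):

```typescript
const flags: Record<string, boolean> = {
  "new-checkout": false, // code is merged and deployed, feature is not released
};

function isEnabled(flag: string): boolean {
  return flags[flag] ?? false;
}

function renderCheckout(): string {
  if (isEnabled("new-checkout")) {
    return "new checkout flow"; // users only see this when the flag flips
  }
  return "legacy checkout flow";
}

console.log(renderCheckout()); // "legacy checkout flow" until the flag is on
```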

Finally, as David said: you don’t need permission to be awesome.

“Writing code you won’t hate tomorrow” by Rafael Dohms

While David talked about delivering new value, Rafael asked this question: how do you deal with the code that was written yesterday?

It’s very common to hear that “we should rewrite all this code”. But how would the result be different if the process stays the same? It’s also common to hear that “real developers ship stuff, clients don’t pay if you don’t deliver”. But “just delivering” is not enough. If what you build collapses, it’s terrible. Do you really want to spend your time on call, putting out fires? If you don’t, you need to keep the quality up and stop wasting time redoing the work.

It’s a good sign if you hate the code you wrote yesterday: it means you learned new things! So you need to be prepared for changes, and there is no silver bullet: you should refactor as you go. Also, testing is a prerequisite for safe refactoring. You want to be able to change code with confidence. Automated testing was already a thing in 1975, what’s your excuse?

When you write code, you’re not writing for the computer. In fact, you’re writing for other developers to understand what you want the computer to do. Thus, you should think about the readability of the code. Clean code is important.

Exercises to practice

Finally, you can enhance your skills in writing good OO code by regularly practicing the 9 Object Calisthenics:

  1. Only one indentation level per method
  2. Do not use else
  3. Wrap primitive types, especially if they contain behavior
  4. Only one attribute accessor per line
  5. Do not abbreviate names
  6. Keep your classes small (~100-200 lines of code)
  7. Limit your instance variables to 2, don’t inject too much into classes
  8. Use first-class collections (see the sketch after this list)
  9. Don’t use getters and setters
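Here’s a minimal TypeScript sketch of two of these exercises, wrapping a primitive (#3) and a first-class collection (#8); the `Price` and `Cart` names are made up for illustration:

```typescript
// #3: wrap a primitive that carries behavior, instead of passing raw numbers.
class Price {
  constructor(private readonly cents: number) {
    if (cents < 0) throw new Error("A price cannot be negative");
  }

  add(other: Price): Price {
    return new Price(this.cents + other.cents);
  }

  toString(): string {
    return `$${(this.cents / 100).toFixed(2)}`;
  }
}

// #8: a first-class collection wraps the array and owns its behavior.
class Cart {
  constructor(private readonly prices: Price[] = []) {}

  add(price: Price): Cart {
    return new Cart([...this.prices, price]);
  }

  total(): Price {
    return this.prices.reduce((sum, p) => sum.add(p), new Price(0));
  }
}

const cart = new Cart().add(new Price(1999)).add(new Price(550));
console.log(cart.total().toString()); // "$25.49"
```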

These are exercises, not rules that must never be broken. They will train you to write better code. Try them for a few months on a side project, or a little project at work.

“Discovering unknown with Event Storming” by Mariusz Gil

Putting the Domain at the heart of your software helps in creating ambitious applications. Event Storming is how we capture the knowledge of the Domain. To do so, you need a simple tool (sticky notes) and infinite space (a big table or wall would do).

Event Storming is all about enabling communication between the business experts and the developers. To quote Alberto Brandolini: “it’s developer (mis)understanding, not expert knowledge that gets released into production”.

So you get people together to tell the story of the product, starting with the facts: events. You collect all the events people can think of—all the things that happen in the system. Then, you re-order the events: you group them together, establish happy paths, and order them chronologically. Next, you identify the actors and the other systems interacting. Finally, you handle the unhappy paths, which is usually a great moment to realize what’s missing.

Sticky note after sticky note, you understand how the system works and grows. If you separate the Domain from the Infrastructure, you might realize that some parts of the system don’t actually need a Postgres database to work. You realize the database becomes merely a storage solution, not the (complex) heart of the software.

Event Storming grows our understanding of our business

Why is Event Storming so popular right now?

That’s because it can save a lot of work that shouldn’t be done. Bad decisions about naming, caching, architecture… cost a lot! Communication with all stakeholders of the project is key to avoiding bad decisions.

Event Storming goes hand in hand with Hexagonal Architecture, DDD, CQRS and BDD. It seems the trend is to make software development focus on the business problem it solves, more than the exact tech it uses. It pushes developers to focus on the problem space more than they usually do. Software development becomes a process; working code is a side-effect.

Finally, Event Storming is not just for greenfield projects. It’s really helpful to capture existing behavior and identify misunderstandings.

Slides & Video

Slides are available at https://speakerdeck.com/mariuszgil/discovering-unknown-with-eventstorming-confoo.

Mariusz’s talk was not recorded at ConFoo, but it was at another conference where he gave it earlier. Here it is:

“Hacking Web Performance” by Maximiliano Firtman

You probably already know that a slow website reduces the conversion rate. Hence, performance is an important subject.

Still, mobile tends to be underestimated when it comes to performance. The average time to load a landing page on mobile is 22s, while 53% of users leave after 3s. And the main problem is not bandwidth, it’s latency.

To be extra-performant on the first load, you should avoid doing more than one roundtrip. Thus, avoid HTTP-to-HTTPS redirects. You can use the HSTS header to tell the browser it should use HTTPS next time. You can even opt in at https://hstspreload.org/, so browsers will already know about it!
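As a sketch, here’s how a Node server could send the HSTS header (note that browsers only honor it when it’s served over HTTPS; the values shown are the usual ones for preload eligibility):

```typescript
import * as http from "http";

const server = http.createServer((req, res) => {
  // Tell the browser to use HTTPS directly for the next year,
  // including subdomains, and allow inclusion in the preload list.
  res.setHeader(
    "Strict-Transport-Security",
    "max-age=31536000; includeSubDomains; preload"
  );
  res.end("Hello!");
});

server.listen(8080);
```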

Then, deliver above-the-fold (ATF) content in less than 14.6 KB. It should embed all the CSS and JS needed for the ATF content to work if the user interacts. If you still have space, include the logo and/or low-res images—you might base64-encode and inline them (see the sketch below).
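A minimal sketch of that inlining trick, assuming a hypothetical `logo.png` next to the script:

```typescript
import { readFileSync } from "fs";

// Turn a small image into a data URI that can be inlined in the ATF HTML,
// saving one request roundtrip.
function toDataUri(path: string): string {
  const base64 = readFileSync(path).toString("base64");
  return `data:image/png;base64,${base64}`;
}

// Usage in a template: <img src="${toDataUri("logo.png")}" alt="logo" />
console.log(toDataUri("logo.png").slice(0, 40));
```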

If you want to address bandwidth, here are a few tools you can use:

  • Zopfli. It would save 3-8% of data transfer compared to GZIP. It’s totally compatible and requires no change for the client.
  • Brotli. It would save ~25% of data transfer compared to GZIP. But not all browsers are compatible, so the server needs to check the client’s Accept-Encoding header (see the sketch below).
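Here’s a minimal sketch of that negotiation with Node’s built-in `zlib` (which has supported Brotli since Node 11.7): serve Brotli only to clients that advertise it, and fall back to gzip otherwise.

```typescript
import * as http from "http";
import * as zlib from "zlib";

const body = Buffer.from("<html><body>Hello ConFoo!</body></html>");

http
  .createServer((req, res) => {
    const accepted = req.headers["accept-encoding"] ?? "";

    if (accepted.includes("br")) {
      res.writeHead(200, { "Content-Encoding": "br" });
      res.end(zlib.brotliCompressSync(body));
    } else if (accepted.includes("gzip")) {
      res.writeHead(200, { "Content-Encoding": "gzip" });
      res.end(zlib.gzipSync(body));
    } else {
      res.end(body); // no compression supported
    }
  })
  .listen(8080);
```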

preconnect and preload also help the browser warm up connections, saving ~200ms of DNS lookup on mobile. It’s especially useful if you’re loading resources from different hosts (CDN, Google Fonts…). Tell the browser right away.

For images, Zopfli can reduce the size of your PNGs by 20%.

Finally, don’t forget to measure performance from the end-user perspective. User-centric metrics like “First Meaningful Paint”, “First Interactive” and “Visually Complete” are the ones to focus on.

“Forgot password? Yes I did!” by Joel Lord

Password restrictions tend to make passwords worse: as people need to change them frequently, they choose weak passwords. It also costs the company money. And there have been many breaches: in 2017, 2.6 billion passwords were compromised. The more computing power we get, the easier it is to crack a password. Social media presence makes social engineering easier: secret questions are usually a bad security practice, since you can find most answers online!

Password managers are a solution, but they add friction, so it’s hard to convince non-technical people to use them.

As developers implementing an authentication system, we should:

  • Follow documented best practices and frameworks
  • Delegate, don’t try to create our own auth system
  • Use 2FA to increase the security of our users

But there is another way: forget the passwords! Let’s see a couple of alternatives.

WebAuthn

WebAuthn is a very recent standard. The general idea is to generate key-pair credentials. The user doesn’t know the password. You need a physical key (authenticator) to make it work, but it can work with TouchID sensors too. You can think of it as allowing the user to sign in to your website using their fingerprint.

It’s an official W3C standard, and recent browsers support it.

The downside is that you need a key or a fingerprint sensor to make it work.
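As a rough sketch, here’s what the browser side of a WebAuthn registration looks like (in a real flow, the challenge and user id come from your server; the values below are placeholders):

```typescript
async function register(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-provided in practice
      rp: { name: "Example App" },
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)), // stable server-side id in practice
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    },
  });
}

// The resulting public key is sent to the server; the private key never
// leaves the authenticator (security key, fingerprint sensor…).
```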

Biometrics

Like face recognition or fingerprints. It’s becoming more and more popular. Today, even voice recognition works.

But it’s easy to trick (e.g. with pictures), even if there are countermeasures (e.g. asking the user to look left, then right). Still, it’s hard to trick biometrics at scale.

The main downside is for the user: if they ever need to change their biometrics, the impact is terrible.

Magic links

Joel’s favorite. Slack introduced them a while ago.

The implementation is fairly simple. Every time there is a login request, you generate a magic link that you store. Then you email the magic link to the person. A link should only work once to authenticate the user. Just don’t forget to expire links after some time, or you’ll have a security issue!
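Here’s a minimal sketch of the idea, with an in-memory token store and hypothetical names (`createMagicLink`, `consumeToken`); a real implementation would persist tokens and actually send the email:

```typescript
import { randomBytes } from "crypto";

const TTL_MS = 15 * 60 * 1000; // links expire after 15 minutes
const tokens = new Map<string, { email: string; expiresAt: number }>();

function createMagicLink(email: string): string {
  const token = randomBytes(32).toString("hex");
  tokens.set(token, { email, expiresAt: Date.now() + TTL_MS });
  return `https://example.com/auth?token=${token}`; // emailed to the user
}

function consumeToken(token: string): string | null {
  const entry = tokens.get(token);
  tokens.delete(token); // one-time use: valid or not, the token is gone
  if (!entry || entry.expiresAt < Date.now()) return null;
  return entry.email; // authenticated!
}

const link = createMagicLink("alice@example.com");
console.log(link);
```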

It’s quite easy to use and implement.

“But what if your email is compromised?” you might ask. Joel’s answer: “Well, you’re quite screwed anyway, since all websites have a recovery mechanism using your email; you’d have the same issue today”.

“It’s not your parents’ HTTP” by Gleb Bahmutov

Gleb went back in history to tell us about the HTTP specification.

The original spec of HTTP/0.9 was short—like 650 words long! It didn’t have error codes, cookies… But it solved the problems they had at that time. The spec was based on TCP/IP to communicate, so you could retrieve information from references—which really mattered for academic researchers (e.g. “publish or perish”).

In 2 years, it went from “unknown” to “used everywhere”. Thus, new problems arose! In 1996, HTTP/1.0 added a bunch of needed, non-academic features: non-HTML data (images!), HEAD and POST methods, status codes and user preferences (User-Agent). In 1999, HTTP/1.1 added a few scaling best practices (security, performance and usability).

AJAX and XMLHttpRequest were responses to: “hey, what if we don’t reload the whole page when only a portion should be updated?”. That idea then scaled to jQuery, AngularJS and the component-based libraries of today (e.g. React, Angular, Vue).

Most security issues we have today (e.g. XSS) come from assuming the incoming data is valid. HTTP wasn’t designed with malicious data in mind. Today, HTTPS has replaced HTTP. Services like Let’s Encrypt helped make that easy!

Since HTTP was built upon TCP, it has its guarantees… and limitations! The main limitation being performance. It’s not a bandwidth problem, but a latency one! Every new connection has an overhead.

Latency vs. Bandwidth impact on Page Load Time, showing latency is the limiting factor

So what if we only open one connection to pass many files? That’s the promise of HTTP/2!

Schema showing HTTPS connection is expensive because TCP + TLS handshake

Schema showing how HTTP/2 Push solves the issue by minimizing the number of handshakes

It can also prioritize files.
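As a sketch, here’s what server push looks like with Node’s built-in `http2` module (the `key.pem`/`cert.pem` certificate files are placeholders):

```typescript
import * as fs from "fs";
import * as http2 from "http2";

const server = http2.createSecureServer({
  key: fs.readFileSync("key.pem"),
  cert: fs.readFileSync("cert.pem"),
});

server.on("stream", (stream, headers) => {
  if (headers[":path"] !== "/") return;

  // Push the stylesheet before the browser even asks for it.
  stream.pushStream({ ":path": "/style.css" }, (err, pushStream) => {
    if (err) return;
    pushStream.respond({ ":status": 200, "content-type": "text/css" });
    pushStream.end("body { color: rebeccapurple; }");
  });

  stream.respond({ ":status": 200, "content-type": "text/html" });
  stream.end('<link rel="stylesheet" href="/style.css">Hello HTTP/2!');
});

server.listen(8443);
```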

HTTP/2 Push is promising for performance, but it doesn’t play nicely with caching. The main issue: if a packet gets lost, the whole connection gets blocked (because of TCP), and performance decreases a lot. Thus, sometimes, it behaves worse than HTTP/1.1. Gleb’s advice is to wait before adopting HTTP/2 Push.

HTTP over QUIC with HTTP/3

The only alternative is to use the other protocol: UDP. It doesn’t guarantee packet delivery or ordering, so this needs to be done by hand. This is what QUIC is built upon, and Chrome is already using it under the hood. It’s a lot of manual, custom work, though. Google can afford it since they control many elements of the request (browser, OS, servers…). It will be part of HTTP/3, but we don’t know when that will actually be ready for production.

Gleb’s slides are available at https://slides.com/bahmutov/http-confoo.


And that’s it! I really enjoyed being there, hearing about these different subjects and meeting new people. ConFoo Montreal 2019 was great, and I’m looking forward to 2020!