Development Archives - Qvik (https://qvik.com/tag/development/)
Creating Impact with Design and Technology. Feed last updated Wed, 28 Aug 2024.

How does Electrolux create its scalable world-class IoT solutions? By bringing software to the core of the company strategy.
https://qvik.com/news/how-does-electrolux-create-its-scalable-world-class-iot-solutions/
Tue, 13 Jun 2023

The rise of IoT has turned the traditional hardware company Electrolux into a global software company as well. This has required changes in the corporate structure and, for one of the biggest global appliance companies and the owner of several well-known appliance brands, such changes don’t happen overnight.

During the past ten years, Electrolux has gone through three phases of IoT. At the start, every connected appliance had an IoT solution specially tailored for it. Usually, the solutions were delivered as apps of their own and they would basically be used as a remote control.

In the second phase, Electrolux built various ecosystem apps each serving a certain type of appliance, like Electrolux’s Wellbeing appliances. Several experiences were still being developed in parallel for different product categories and business areas.

“In these phases, our IoT work didn’t scale well, a lot of similar problems were being solved in different ways, and our digital products did not always meet the expectations of our customers,” says Andreas Larsson.

Larsson is the Engineering Director of Digital Experiences at Electrolux, responsible for the engineering solutions for all user-facing digital products, brands and business areas globally.

“Financially, it was a big decision to get started with the renewal, since we still needed to maintain all the old applications until we could replace them, and we also needed to keep releasing new appliances during the process.”

At the moment, the company is merging all of its smart home appliance applications into a single codebase and cloud implementation. By the end of the year, the global codebase will support eight apps: Electrolux, AEG and Frigidaire will each have one iOS and one Android application for all their appliances, and all the other brands will be combined in the +home app for iOS and Android.

Read more about Electrolux’s IoT journey from our previously released reference article Electrolux and the evolution of IoT.

Establishing a digital product organisation within Electrolux

To make their digital product development smarter and more efficient, Electrolux established a digital product organisation within the company. This required changes in organisation, processes and talent.

There are now more than 210 people developing Electrolux’s digital products, primarily in Sweden, Italy and Malaysia. The people are divided into more than 15 teams, and the user base is expected to exceed 30 million users in a few years.

“Recruiting such a large number of competent people was a challenging task, and that’s when Qvik came into the picture.”

Andreas Larsson, Engineering Director of Digital Experiences at Electrolux

Qvik has been working with Electrolux since the spring of 2021. During this time, the teams have made important decisions on technology choices and kept pace with the constantly evolving world of connected appliances and experiences.

“We develop our products in-house with the help of consultants integrated into our teams, and we are constantly looking for new talent.”

As a successful company with over 100 years of history and its own ways of working, the corporation has also had to adapt to the new agile ways of software development.

“For instance, hardware manufacturing methods don’t apply well to software development, and the budgets need to be planned differently as well.”

Biweekly release trains and clear OKRs keep things rolling

Larsson explains that Electrolux manages the complexity of its digital product development by sharing a set of principles and ceremonies for the teams. The teams work according to a set of agile methods but are not strictly committed to a single one.

“We also have some criteria for how the teams can be put together. For instance, teams can’t be scattered across more than two time zones and physical locations.”

Electrolux has found a global biweekly release train to be very useful: a release goes out at a set time every other week. If you miss the train, you must wait for the next one.

The quality of the work is expected to be production-grade when the release train starts, and there are structured quality and decision-making processes all the way to releasing the apps to app stores. The teams have now been making biweekly releases since March 2022.

“Without the rigour of the trains, someone would always have to fix just one small bug before the release, and we now have too many teams not to release the value provided by all of the others on schedule.”

Andreas Larsson, Engineering Director of Digital Experiences at Electrolux

Electrolux’s digital product development has set clear objectives and key results (OKRs) and metrics to keep the teams on the same page. This helps the teams measure their success and maintain the quality of the products.

“In software development, knowing exactly what is going on when something is released is a great asset. For instance, as we instantly know if the app’s login time goes from 1 to 4 seconds, we can start fixing it right away.”

The sheer number of teams and individuals working with Electrolux’s digital products also requires a balancing act: the teams need to have autonomy in their work but, at the same time, having no overarching or shared direction will lead to too much local optimisation.

Want to join Qvik Sweden’s next Digital Product Meetup?

This article is based on Qvik Sweden’s Digital Product Meetup held on June 7. The DiP meetups are a place for product managers, product owners and other people working in product management to discuss and learn about relevant themes.

The next event will be held after the summer holidays. If you are interested in joining Qvik Sweden’s next DiP meetup, please leave us your contact details in the form below and we will send you an invitation at the end of the summer.

If you are based in Finland and would like to join Qvik’s DiP Meetup in Helsinki, leave your contact details in the Helsinki form and we’ll invite you to the next one.


DiP Meetup Stockholm

If you wish to get an invitation to the next DiP Meetup in Stockholm, please leave your contact information and we’ll get back to you.

DiP Meetup Helsinki

If you’re not yet on the list and wish to get an invitation to the next DiP Meetup in Helsinki, please leave your contact information and we’ll get back to you.

Web push notifications on iPhone and iPad are finally here – but you still need a PWA to use them
https://qvik.com/news/web-push-notifications-on-iphone-and-ipad-are-finally-here-but-you-still-need-a-pwa-to-use-them/
Tue, 28 Feb 2023

Apple has just enabled web push notifications for Progressive Web Applications (PWAs) in iOS and iPadOS 16.4 beta 1. The release is still in beta, but you can start testing the feature right now.

Web push notifications were one of the few things you couldn’t do without a native app on iOS. To get them working, you need to add the web page to your iOS Home Screen and approve push notifications.

Google has supported web push and other PWA features on Android for years, so you could say it’s about time.

Safari 16.4 Beta brought web push notifications to PWAs on iOS Home screens.

While Apple has never publicly used the term ‘PWA’, it has supported the technologies that make PWAs installable and offline-capable in Safari on iPhones and iPads since 2018, albeit with limited features.

PWAs offer a compelling alternative to traditional web apps and native mobile apps, with benefits including improved user experience, increased engagement, greater reach, lower development costs and improved discoverability.

There are still limitations in iOS. For example, web apps can only store offline data and files totaling a maximum of 50 MB. They don’t have access to some hardware features, such as Bluetooth, and they can’t execute code while in the background (Background Sync).
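An app can check how close it is to such a storage budget at runtime with the standard Storage API. Below is a minimal sketch; the 50 MB figure is the iOS limit mentioned above, and the quota numbers browsers report are estimates rather than guarantees.

```typescript
// Convert raw byte counts from the Storage API into megabytes.
const bytesToMB = (bytes: number): number => Math.round(bytes / (1024 * 1024));

// Ask the browser how much offline storage the app is using.
// (On iOS, Safari caps offline storage for web apps at around 50 MB.)
async function checkStorageQuota(): Promise<void> {
  const storage = (globalThis as any).navigator?.storage;
  if (!storage?.estimate) {
    console.log("Storage API not supported in this environment");
    return;
  }
  const { usage = 0, quota = 0 } = await storage.estimate();
  console.log(`Using ${bytesToMB(usage)} MB of ~${bytesToMB(quota)} MB available`);
}
```

Calling `checkStorageQuota()` before caching large assets lets the app degrade gracefully instead of failing mid-write.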

Ready to support push notifications on your PWA?

PWAs have several benefits, including SEO friendliness, ease of maintenance, effective security and many more, and will soon include push notifications in iOS as well.

According to a 2021 Statista study, around 17% of North American and European e-commerce companies had already introduced PWAs or were planning to invest in them, while 28% were still evaluating the matter.

You can boost your user engagement with push notifications. But the most important thing is to use them wisely and in a way that benefits the user. I also recommend reading my colleague Aija Malmioja’s article Notifications are better than ever. But don’t push it. to help you plan your best push notification strategy.

To start using web push notifications, you obviously first need to have a PWA. When you have a PWA and a plan on how to use web push notifications, all you need is the technical implementation.
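On the client side, that technical implementation boils down to the standard Push API. The sketch below is a hedged example, not a drop-in implementation: it assumes your PWA already registers a service worker, and the VAPID public key is a placeholder for your own push server’s key.

```typescript
// The Push API requires the applicationServerKey as a Uint8Array,
// so the base64url-encoded VAPID public key must be decoded first.
function urlBase64ToUint8Array(base64Url: string): Uint8Array {
  const padding = "=".repeat((4 - (base64Url.length % 4)) % 4);
  const base64 = (base64Url + padding).replace(/-/g, "+").replace(/_/g, "/");
  const raw = atob(base64);
  return Uint8Array.from(raw, (c) => c.charCodeAt(0));
}

async function subscribeToPush(vapidPublicKey: string) {
  const nav: any = (globalThis as any).navigator;
  // On iOS this only works once the PWA has been added to the Home Screen.
  if (!nav?.serviceWorker || !("PushManager" in globalThis)) {
    return null; // browser (or iOS version) doesn't support web push
  }

  // Permission should be requested from a user gesture, e.g. a button tap.
  const permission = await (globalThis as any).Notification.requestPermission();
  if (permission !== "granted") return null;

  const registration = await nav.serviceWorker.ready;
  return registration.pushManager.subscribe({
    userVisibleOnly: true, // required by browsers: every push must show a notification
    applicationServerKey: urlBase64ToUint8Array(vapidPublicKey),
  });
}
```

The subscription object returned by `subscribe()` contains the endpoint and encryption keys your back end needs to actually send pushes.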

What’s the status of your PWA and web push notification strategy? We’d be more than happy to continue the discussion and do a demo or proof of concept (POC) for your needs. Feel free to contact us!


Illustration: Midjourney AI

How to estimate a budget for your mobile app? We did the maths and can show you.
https://qvik.com/news/how-to-estimate-a-budget-for-your-mobile-app-we-did-the-maths-and-can-show-you/
Tue, 21 Feb 2023

Qvik’s state-of-the-art calculation tool can help you choose the right technology for your mobile application. Check out the example and schedule a meeting for a consultation.

When it comes to mobile app budgets, surprises are common and usually not pleasant. The reasons behind these surprises are often related to mobile technology choices: if you fail to consider the time frame, life cycle, maintenance costs or upcoming integrations and features of your app closely enough when making the tech choices, your calculations can be too optimistic.

Native, hybrid and cross-platform solutions all have their pros and cons. If something sounds too good to be true, it probably is, and sometimes the solution that sounds expensive at first can be the most cost-effective decision in the long run.

Our new calculation tool can help you avoid common pitfalls and calculate a realistic budget for your app’s mobile development. In addition to the mobile estimates provided by the calculation, you naturally have to factor in design work and back-end development.

Example of a mobile budget estimate

This example is a total fabrication – we made it up. But it could just as well be based on a true story, since stuff like this happens all the time.

In this fictional scenario, Acme Corporation is planning to develop their first mobile application. Their industry isn’t relevant; they want the mobile application to open a new customer channel and boost their sales.

The CTO and PO of Acme Corporation are discussing the roadmap for 2023. The discussion revolves around three questions: which features do we need, what is the timeline for the expected releases, and what would be a realistic budget?

They begin the journey by deciding that the first release of the application should be ready three months from the start of development. By this time, the application would have a limited set of features flagged as the most important.

After the first release, they will continue the journey with upgrade one and upgrade two, which are both estimated to take three months. When these major updates are done, the team goes into continuous development and does minor changes and upkeep for the application.

The CTO and PO of Acme Corporation understand that the development project ahead involves three variables:

  1. Budget
  2. Timeline
  3. Features

They also understand that not all of these variables can be anticipated accurately. The PO and CTO need to decide which of these variables are negotiable and which aren’t.

They decide that the budget is something that needs to be fixed, as they have received a framework from their Board that they need to follow. This leaves room for the timeline and features to be adjusted in an agile way during development.

Creating the MVP, MLP, 1.0 – or whatever you want to call it

The CTO and PO start looking at the first release with the mentality that they need to get it out in three months and with a reasonable price tag. They look at the technology choices currently available. They identify three potential frameworks:

  • Native (Swift/Kotlin)
  • Cross-platform – Flutter
  • Cross-platform – React Native

They could create the first release in three months with either cross-platform or Native, but the costs would be very different.

Analysis of the first release and first price tags

The first release consists of features that are easy to implement and lay the foundation for the whole project.

  • Cross-platform offers the customer an opportunity to start with a smaller team, which brings cost benefits for the first release. This team will also have a mobile architect that can assist in development.
  • Native will require the team to have two Swift and two Kotlin developers to ensure continuity. They will be supported by a mobile architect proficient in both platforms.

The Native team’s burn rate for the three months is roughly 230k euros, while the cross-platform team gets away with just 150k. It’s easy to see that cross-platform is most definitely the way to kick off the project.

Right? Keep reading.

Upgrade 1 and the need for new integrations

The Upgrade 1 pack creates some challenges in the cross-platform team’s roadmap. They need to integrate an external SDK that has no React Native/Flutter support. The cross-platform (CP) team has two options:

  1. Bring additional Native developers into the team (one per platform)
  2. Find developers that can do both (Cross-platform and Native)

In our simulation, the team hires two additional developers to do the integration, which doubles the team size and thus also the costs. 

The Native team, meanwhile, can be made smaller by dropping the architect and adjusting the team’s seniority. As the bulk of the challenging work has been done, the team can easily shift towards a medior/junior combo. This brings the development burn rate down by a significant margin.

From the graph below, you can see that the development costs intersect: cross-platform development actually becomes more expensive as complexity increases during the project, whereas the Native team’s costs follow a steady downward curve.

Development cost of a mobile app, taking into account release cycles and technology choices.

Cross-platform is still cheaper than Native development at this point. Easy decision, right? Let’s move on.

Upgrade 2 and continuous development

Upgrade 2 doesn’t bring any new challenges for the team as feature development continues. The Native team drops the architect and settles for the one medior and one junior developer per platform approach. This ensures continuity, as having just one developer could be a risk in the event of illness or other absence.

Cumulative cost development of a mobile app, taking release cycles and technology choices into account.

The Cross-platform team’s developers learn Native so that they are able to maintain and develop the modules that didn’t have cross-platform support. Team size is maintained at three medior developers and one junior.

After Upgrade pack 2 is completed, the project shifts to continuous development mode. During this time, the team upgrades current features and adds new minor features to the application.
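The cost dynamics of this simulation can be sketched in a few lines of code. The per-phase burn rates below are illustrative assumptions anchored to the article’s 230k and 150k first-release figures; the later phases are made up to match the shape of the curves described above, not taken from real project data.

```typescript
// Illustrative cost simulation of the Acme scenario.
// Burn rates are in kEUR per three-month phase: first release, Upgrade 1, Upgrade 2.
interface TeamPlan {
  name: string;
  burnPerPhaseKEur: number[];
}

const native: TeamPlan = {
  name: "Native (Swift/Kotlin)",
  // Expensive start (four developers plus an architect), then cheaper as the
  // architect leaves and seniority shifts towards a medior/junior mix.
  burnPerPhaseKEur: [230, 190, 150],
};

const crossPlatform: TeamPlan = {
  name: "Cross-platform",
  // Cheap start with a small team; costs jump in Upgrade 1 when native
  // developers are added for the unsupported SDK integration.
  burnPerPhaseKEur: [150, 260, 280],
};

function cumulativeCost(plan: TeamPlan): number[] {
  const totals: number[] = [];
  let running = 0;
  for (const burn of plan.burnPerPhaseKEur) {
    running += burn;
    totals.push(running);
  }
  return totals;
}

// After Upgrade 1 cross-platform is still slightly cheaper in total,
// but by Upgrade 2 the cumulative curves have crossed.
console.log(cumulativeCost(native)); // [230, 420, 570]
console.log(cumulativeCost(crossPlatform)); // [150, 410, 690]
```

Swapping in your own burn rates is the whole exercise: the crossover point moves with the team sizes and the amount of native-only work in the roadmap.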

Did our fictional Acme Corporation make the right tech decisions?

In conclusion, while cross-platform app development may seem like the cheaper option in the short term, it’s important to consider the long-term costs and potential limitations before deciding which approach to take. It’s not always about the cost, but the big picture of the mobile application in your company’s strategy.

In our fictional scenario, Acme could go either way with their technology choice, as you always need to evaluate the situation on a case-by-case basis. In some cases, native app development may be the better choice for businesses that want to create high-performance apps with access to all the features of the device and the operating system.

In other cases, however, cross-platform development is the best choice for businesses that want to minimize the cost of app development and keep developing their app in the long run.

Having a good partner to help you with these questions can save you a lot of money and effort.

Qvik’s top hints on what to consider when choosing the technology for your mobile application project:

  • Long-term costs when it comes to development and upkeep
  • Target customer experience level
  • Availability of developers now and in the future; in-house recruitment and/or outsourced resources
  • Go-to-market schedule on the necessary platforms
  • Production stability and easy updating
  • Evaluation of features; current and future
    • Utilizing native support, new features, dependencies on third parties
  • Performance of the chosen technology

Want to hear more? Contact Juha Falck, juha.falck@qvik.com, if you are interested in evaluating your upcoming mobile application project. We would be happy to simulate your project and give you some recommendations.

Will Kotlin Multiplatform Mobile change the game for native, hybrid and cross-platform decisions?
https://qvik.com/news/will-kotlin-multiplatform-mobile-change-the-game-for-native-hybrid-and-cross-platform-decisions/
Thu, 19 Jan 2023

Kotlin Multiplatform Mobile (KMM) is a Software Development Kit (SDK) that lets you create applications with a common business logic and native UI components for iOS and Android. In KMM, you share all the common code and create the native UI with the tools that you are used to.

KMM is the mobile part of the wider Kotlin Multiplatform (KMP) SDK, which you can also use to develop applications for JVM, JavaScript, Windows, Linux, macOS, watchOS and tvOS.

KMM is a new alternative to fully native development and cross-platform frameworks like Flutter and React Native. The idea is to get the best of both worlds: shared business logic combined with native UI components and performance.

A few years ago, we wrote an article about choosing the right app technology, which compared the benefits of native, hybrid and cross-platform. The article is still otherwise pretty much up to date, but KMM was not yet an option back then, so this article will complement that story.

Benefits of Kotlin Multiplatform Mobile

There are lots of benefits to KMM. The most important one is that features share a single implementation, so they work exactly the same on both platforms, not just almost the same. You also share the same bugs and only need to fix them in one place.

When developing the UI the native way, you also get the latest and greatest UI, just as if you were building two separate native applications. You are not limited to a framework’s components and don’t have to wait for a framework to start supporting something that Google or Apple have just released in their latest updates.

In one KMM project of this kind currently in production, 52 percent of the code is shared: 106k lines of shared code, 55k lines of Android code and 42k lines of iOS code.

Current problems

There are still some problems, mainly in the shared interface: how the shared Kotlin code is exposed to the iOS (Swift) side. You have to write some wrappers, and not all Kotlin features are supported on the iOS side.

KMM is still in beta, but you can expect it to improve: making KMM stable is the top priority in the latest Kotlin roadmap update from December.

iOS developers have to learn a new language, albeit one similar to Swift, when they work on shared code. On iOS, a change to the shared code can currently require a complete rebuild of the project, which takes a long time. There are also still some limits on debugging shared code from the iOS application. You can expect these things to improve over time.

Differences between Android and iOS

The shared code is written in Kotlin, which is more familiar to Android developers. On iOS, you access the shared code through a native library interface that is exposed in Objective-C rather than Swift, which makes it feel dated.

Android development stays pretty close to what we are used to; we just have to take more care with the shared interface, because both platforms will use it. The only difference is that the libraries you use on the shared side must support KMM.

You need to decide whether to develop the shared code as a library with versioning or develop both platforms at the same time in the same repository. This really depends on your team size and individual expertise.

Comparison to cross-platform

In cross-platform development, you use the tools and APIs provided by the framework, which is responsible for rendering the UI on different devices. This means you have to wait for the framework to start supporting the latest features of each platform.

With KMM, you can always use the latest platform features. Cross-platform has the benefit that you only write the UI once, but remember that it may not feel optimal or native. Another benefit is that you can do everything with a single language and tool set.

When do I start the rewrite?

As mentioned, the best thing about KMM is that you don’t have to start developing your current native applications from scratch.

You can add it to your current native implementations and create new features using shared code. This way, you can add the shared code to your project gradually. And if you don’t like it, you can always switch back to the old way and write everything separately.

Here is a good starting point for developers who want to start developing their mobile app with KMM.

Conclusion and further reading

Despite the fact that this is a new technology and not yet widely known, Kotlin Multiplatform Mobile is a valid option that we believe will become a strong player in the field.

KMM is improving rapidly and is worth considering whether you are developing an application from scratch or adding new features or improvements to an existing application.

Illustration: Midjourney AI

Learning Flutter with a React Native background
https://qvik.com/news/mobile-development-learning-flutter-with-a-react-native-background/
Tue, 10 Jan 2023

This article will focus on the differences between two cross-platform frameworks and the reason why Flutter is growing rapidly in the developer community.

Google released Flutter in 2017 as a complete software development kit for building applications on multiple platforms. It was designed for quickly building natively compiled, high-performing and appealing user interfaces (UIs).

While React Native uses the popular JavaScript, Flutter’s main programming language is Google’s Dart. Dart is strongly typed with nullable types, class-based, and supports named parameters in functions.

According to GitHub, Flutter had approximately 139K stars as of May 2022, while React Native (RN) had around 102K.

One codebase for five operating systems

In addition to Android and iOS, which RN also covers, Flutter can share its code across other operating systems such as Windows, macOS and Linux. This makes it a strong choice if a developer wants a solution that works on both mobile and web.

To get the most out of Flutter’s web support, it should be used for Progressive Web Apps (PWAs), single-page applications or web apps based on existing mobile versions. Interestingly, Flutter can also run on embedded devices like vehicles and TVs. For example, Toyota has used Flutter for its cars’ built-in entertainment systems.

Animations as a plus

Animation handling is one of Flutter’s most noteworthy advantages at the moment. Whereas React Native has two built-in animation systems and requires a third-party library for complex transitions, Flutter ships with its own collection of animations, and some widgets come with their own motion effects.

Furthermore, Flutter includes Skia as its 2D graphics library, so it can render graphics faster and more smoothly. In general, Flutter handles straightforward effects like delays or reversals well, while React Native is better at controlling flexible animated values.

Advantage of using UI widgets

Flutter uses ready-made, fully customizable widgets to display its user interfaces. This saves a lot of time and lets you focus on functionality instead of pulling in a library or building components from scratch as in React Native. Dropdowns and menu widgets are obvious examples.

The application will look identical on Android and iOS but, if a native appearance is required, Flutter provides Material Design widgets for Android and Cupertino widgets for iOS.

Flutter’s own UI widgets also make its applications resilient to system updates, while React Native might have some issues with native elements. Though the problem rarely occurs, I felt it was worth highlighting in this article.

Plugins developed by Google

Because Flutter was developed by Google, it comes with first-party plugins for a number of advanced features, such as sensors, GPS, Bluetooth and maps. This makes the framework an optimal choice for apps that cannot use third-party libraries.

However, React Native inevitably dominates in the number of plugins due to its earlier release and big community. In addition, applications that involve less tracking or location work best with RN thanks to its fast iteration cycle and ease of use.

Compilation methods

There are two compilation methods: just-in-time (JIT) and ahead-of-time (AOT). RN uses the former: the JavaScript is compiled as the app runs, which allows it to execute dynamic blocks of code. Dart in Flutter is AOT-compiled for release builds, so the app is compiled before running, which reduces application size, improves performance and catches errors earlier. Flutter also uses JIT in development mode to enable hot reload.

CI/CD support

Flutter has detailed documentation on continuous integration and deployment to both Google Play and the App Store. The work is easily done via the CLI, and third-party tools can be used for complicated setups. Despite its huge community, RN has poor deployment instructions, especially for iOS. Services such as Bitrise are recommended to automate the process.

Testing and debugging

Since Flutter uses the same code for both mobile platforms, it requires fewer tests than React Native. Flutter also offers complete built-in automated testing tools covering unit, widget and integration tests.

On the other hand, React Native mostly focuses on the unit level and lacks support for the rest. Popular testing libraries for RN are Jest and Jasmine, while test, flutter_test and integration_test are popular with Flutter.

Regarding debugging, the two frameworks are evenly matched. Both have hot reload, which immediately reflects UI changes during development. Each also has its own built-in debugging tools and IDE support.

However, RN’s debugging is said to be more cumbersome, since problems can come from either the JavaScript code or the native side (third-party libraries). In Flutter, the Chrome browser (DevTools) and OEM debuggers can step through the codebase with breakpoints and expose the error. Flutter Inspector and Flutter Outline also help by displaying the widget tree and build methods.

Final thoughts

Deciding between Flutter and React Native for your next project would depend on many factors, such as development time, performance and stability. Flutter seems to have huge support from Google with many built-in tools, and though the community is smaller than React Native’s, it is growing rapidly.

Flutter works optimally for apps that require animation rendering or UI-focused, cost-effective, smooth and integrated solutions. Google Ads, eBay, Alibaba and Hue by Philips are successful examples of Flutter development. RN suits web developers better thanks to JavaScript and can be useful for 3D animation.

Most of the mobile developers at Qvik would like to learn Flutter as an extra skill because of its popularity and rapid growth. We have experts in both native and cross-platform development, and are more than happy to help you with your digital solutions. Check out some of our projects here.


Illustration: Midjourney

Getting objective results with automated UI testing
https://qvik.com/news/getting-objective-results-with-automated-ui-testing/
Mon, 14 Nov 2022

Automated tests help the development team at least as much as the product owner. They help you to stay focused, make onboarding much smoother and even foster a positive work environment.

In my previous article What is automated UI testing and why should you do it?, we discussed the main concepts of automated testing and its main benefits for the product owner. In this article, we take a closer look at how we developers can benefit from test automation.

Whether you are building a new application from scratch or adding functionality to an established codebase, it’s essential to make sure that the codebase is a coherent whole. This can range from stylistic choices to organising code at the actual user interface level.

Introducing more and more automated tests is not always a net positive, though. Every test adds a few seconds or minutes to the overall runtime and, at worst, flaky tests can prevent perfectly good code from being deployed.

In a later post, I will focus more on what merits testing in my opinion and on how to choose the best tool for the job.

Automation helps us be human

It cannot be overstated how much software development is about human interaction. In my experience, automating as much as possible leaves more space for meaningful discussions and fosters a more positive work environment.

When you are trying to write a report and keep getting into arguments about the Oxford comma instead of the subject matter, it feels dismissive and unhelpful. If this happens day in and day out, this style of communication will eventually undermine trust and make people less inclined to ask for help or opinions. This is what code reviews that focus on code style rather than functionality feel like.

A readme document explaining the desired code style is helpful. Even better, the project could have tools for making sure that all code conforms to the same style. Prettier and other opinionated formatters can shave off an entire layer of needless debate, and comprehensive linting rules can catch a bunch more.

To expand on this, the existing features and functionality are effectively tacit knowledge for developers unfamiliar with some part of the application. If the new developer adds a small feature and checks that everything still seems to work after the change, it’s really deflating to hear that they’ve “broken” something. Proper regression tests could have helped the developer feel in control and fix the regression before putting the code up for review.

There are always things that fall outside the tools’ capabilities, but it is a tremendous time-saver to have the basics covered without any human intervention.

Below, I give some more reasons why I believe that automated tests make developers’ lives easier.

Make sure that the features work – and keep working

Confidence. Developers and application publishers want to feel confident that what they are putting out into the world is of high quality and has as few bugs as possible. When everyone in the team is busy and you just need to get the new version out, you will feel much more confident when the tests are showing up green.

Professionalism. Automated testing is the way to make sure that no critical part of the app is forgotten in the pre-release testing phase. Humans are fallible creatures and, no matter how important something is, we tend to skip some steps if we can’t see the necessity.

If you have quick-release skewers on your bicycle, your bike came with instructions to check the skewers every single time before getting on. This is obviously super important for safety, but the human psyche ignores the importance out of habit: nothing bad ever happened before.

Test automation can alleviate this by outsourcing the tedious tasks to computers, leading to fewer regressions and better overall quality.

Focus on what matters

Tests as a prerequisite for merging. Contemporary software development is done in branches, in which each developer (or pair, etc.) in effect works independently of all concurrent work. Finished code is reviewed and merged into the main development branch. 

It is very helpful to have a CI setup that automatically runs all the relevant tests on each branch, because then both the author and the reviewer get to know the status without any manual steps. This means that any regressions caught by automation are plainly visible and can be fixed before the reviewer even takes a look at them.

When everyone agrees to write new tests for any new features and follow up on that as part of code reviews, the codebase will continue to be well tested throughout its lifetime.

Conscious decision to change existing features. The most insidious kind of bug is the regression: a change to one part of the application makes another, seemingly unrelated part misbehave or break entirely.

Regressions are particularly frustrating for both developers and users: this used to work just fine, why doesn’t it work anymore? Luckily, automated testing can very often bring these issues to light before they end up on the users’ screens. When several tests fail due to a refactor, fixing them makes altering the specifications for those parts of the program a conscious decision for the developers. 

Commonly agreed rules can be enforced. Tools such as linters, code formatters, static analysers, etc. can be a tremendous help in keeping the code style consistent, avoiding common coding pitfalls and letting code reviews focus on the important things.

While, strictly speaking, these tools are independent of automated testing, they complement testing really nicely and are very easy to slot into the same CI process.

As more people work on a project, more personal opinions on minor things like syntax variants start to appear in the codebase. Automated checks can help maintain a ruleset everyone can agree with.

Clarify intent

Critical paths are well defined. The most important user flows in an application are called critical paths. These are the flows that need to work even if nothing else does.

In a web store, this could include adding items to your cart and checking out. On a messaging app, it would mean choosing a recipient and sending a message, as well as probably receiving one.

Critical paths are often not documented at all, or are documented in the wrong place where the development team doesn’t get reminded of them on a daily basis. Automated tests can help with this too.

An end-to-end test that is easy to follow can very effectively define a critical path for the team, and also make sure that it keeps working. And again, if new steps are added or old steps are removed, having to change the test to match the new flow makes changing a fundamental part of the application a very conscious decision.

The way a feature is designed to work is reaffirmed in tests. Code is written for humans to read. It has great communicative power in and of itself, and the right amount of comments can help clarify the less obvious parts.

A great side benefit of automated tests is that they also explain how the authors intended the feature to work in the first place. It’s one thing to have logic for a form that hides fields based on previous responses and another to have tests for making sure that question 2.a. is only visible when the answer to question 2 is “Yes”.

Having the expected result defined in terms of tests clearly shows whether the hiding is done correctly and not the wrong way around by accident, for example. This is even more helpful in more complicated user interfaces, in which many different solutions can seem like valid options.
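As a minimal sketch of what such a test could look like, consider the form-field example above. The question identifiers and the `is_question_visible` helper are hypothetical, standing in for the application's real visibility logic:

```python
# Hypothetical visibility rule: question "2.a" is only shown when
# the answer to question 2 is "Yes".
def is_question_visible(question_id: str, answers: dict[str, str]) -> bool:
    if question_id == "2.a":
        return answers.get("2") == "Yes"
    return True


# Tests like these document the intended behaviour and would catch
# an accidentally inverted condition.
def test_question_2a_hidden_by_default() -> None:
    assert not is_question_visible("2.a", {})


def test_question_2a_visible_after_yes() -> None:
    assert is_question_visible("2.a", {"2": "Yes"})
```

Reading the test names alone already tells a newcomer how the form is supposed to behave.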

Value over time 

Easy onboarding and hand-offs. Long-lived software projects will always experience some rotation. People have shorter and longer vacations, they move to new projects, and they switch jobs. At the same time, new team members join in and get familiar with the codebase.

As professionals, great developers want to keep the codebase accessible to new people at all times. Automated tests make it far easier to pick up where other folks left off. They ensure that the recent changes do not cause unintended side-effects and clarify the intent of the existing feature code.

Managing technical debt. As software continues to evolve, old decisions and outdated libraries start to weigh on the speed and ease of development.

Unless managed on a regular basis, technical debt eventually becomes so bad that people start to deliberately avoid changing certain parts of the system. This, in turn, turns those parts of the codebase into black boxes that no one understands anymore.

Tests can help with both avoiding this situation and getting out of it. Tests provide a clear reference point for the implementation, letting us know whether the changes to the inner workings have been successful. For a black box, we can first write a comprehensive set of tests based on how it works currently, and then write a new implementation that also passes those same tests.
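As a sketch of that workflow, suppose a legacy slug-generation function has become a black box; the function and its behaviour are invented here purely for illustration. We first pin down the current behaviour with tests, and only then write the replacement against them:

```python
# Step 1: characterise the current behaviour of the "black box".
def legacy_slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")


def test_lowercases_and_joins() -> None:
    assert legacy_slugify("Hello World") == "hello-world"


def test_outer_whitespace_is_stripped() -> None:
    assert legacy_slugify(" Hello World ") == "hello-world"


# Step 2: a new implementation must pass the same tests
# before it is allowed to replace the old one.
def new_slugify(title: str) -> str:
    return "-".join(title.lower().split())
```

Once the new implementation passes the characterization tests, the swap becomes a low-risk, well-understood change instead of a leap of faith.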

Conclusion

There are plenty of reasons to do automated testing for user-facing applications.

Automation helps us focus our communication on the non-trivial and can make newcomers feel more comfortable with making changes to the codebase. Tests integrate nicely to CI/CD processes, so that they become a natural part of the team’s workflow.

Automated tests can also ensure that the way a feature was originally defined was intentional and even explain which user flows are considered critical in the application.

Now that we have a good baseline understanding of the benefits of automation, I will move on to more specific topics in future articles. In particular, I will explain why I think end-to-end testing is great to start with, but not the end-all solution, and how I approach choosing what to test.

Illustration: Midjourney

Typing in the context of dynamic languages 3: Defining custom types in Python https://qvik.com/news/typing-in-the-context-of-dynamic-languages-3-defining-custom-types-in-python/ Mon, 07 Nov 2022 07:05:48 +0000 https://qvik.com/?post_type=qvik_story&p=3366 How to use our own types to create new subtypes to make our typing even more precise, especially in terms of our business logic?

The post Typing in the context of dynamic languages 3: Defining custom types in Python appeared first on Qvik.

In the previous sections, I gave an overview of subtyping and how to best use it, including in the case of variance and generic types. We want to use the most precise types possible at all times, but in the case of generic types with contravariant parameters, this means using supertypes for those parameters.

Next we will use our own types to create new subtypes. Let’s start with defining new types.

The simplest form of defining a new type is making a type alias. But that is just a new name for a type and does not create a subtype.

Age = int
 
def birthday(a: Age) -> Age:
    return a+1
 
 
a: Age = 10
i: int = 10
 
birthday(a)

# this also works, showing that Age is just another name
# for int, not a distinct subtype
birthday(i)

To actually create a subtype, we need to use typing.NewType. This requires us to add some “casts” into the new type (actually class instantiation):

from typing import NewType
 
Age = NewType('Age', int)
 
def birthday(a: Age) -> Age:
    return Age(a+1)
 
 
a: Age = Age(10)
i: int = 10

# an Age can be used as an int, its supertype 
reveal_type(a + i)

birthday(a)
birthday(i)  # this does not work

note: Revealed type is "builtins.int"
error: Argument 1 to "birthday" has incompatible type "int"; expected "Age"

Type aliases are meant to make longer types easier to write, but NewType makes your type checker enforce that you don't "compare apples to oranges", as we are taught in primary school: you shouldn't be able to use a temperature where an age is expected, for instance.
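For example, two distinct NewTypes over int are not interchangeable, even though both are plain ints at runtime (the Celsius type and the can_vote function are made up for illustration):

```python
from typing import NewType

Age = NewType('Age', int)
Celsius = NewType('Celsius', int)


def can_vote(age: Age) -> bool:
    return age >= 18


a = Age(21)
t = Celsius(21)

can_vote(a)  # fine
can_vote(t)  # rejected by the type checker: expected "Age", got "Celsius"
```

The second call still runs (NewType is erased at runtime), but the static type checker flags it, which is exactly the apples-to-oranges protection we want.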

Defining new generics

You can use type aliases or NewTypes to define new generic types, or you can parametrize a new class. To illustrate this, we will use an analogy of executables for certain operating systems:

class OS:
    pass
 
class Desktop(OS):
    pass
 
class Mobile(OS):
    pass
 
class Windows(Desktop):
    pass
 
class Linux(Desktop):
    pass
 
class Android(Mobile):
    pass
 
class IOS(Mobile):
    pass

Starting from these definitions, we can define new types based on existing generics, for instance list:

Network = list[Desktop]

But we can also define our own new generic classes:

from typing import Generic, TypeVar
 
T = TypeVar('T')
 
class Executable(Generic[T]):
    pass

The latter example is actually flawed, as there is nothing stopping us from creating an Executable[int]. Our generic type is not precise enough, because by default the type variable can stand for any subtype of object. To restrict the allowed types, we specify the supertype of the variable with the bound parameter of the TypeVar:

from typing import Generic, TypeVar
 
T = TypeVar('T', bound=OS)
 
class Executable(Generic[T]):
    pass

Finally, we can also specify that a type parameter is covariant or contravariant in our generic type. These use cases are of course a little more niche, so we will just leave them here without going any deeper.

Desk_co = TypeVar('Desk_co', bound=Desktop, covariant=True)
Mob_ct = TypeVar('Mob_ct', bound=Mobile, contravariant=True)
 
class Network(Generic[Desk_co]):
    pass
 
class Notifier(Generic[Mob_ct]):
    pass

Protocols

The last kind of subtyping we will look at is completely unrelated to classes or explicit subtyping. It is a de facto form of subtyping usually called "duck typing" ("If it walks like a duck and it quacks like a duck, then it must be a duck"), but properly called structural subtyping: if an object has the methods required by a specific "interface", called a Protocol in Python, then it is a subtype of that protocol.

from typing import Protocol
 
class Duck(Protocol):
    def walk(self) -> None:
        ...
    def quack(self) -> str:
        ...
 
# explicit implementation of the protocol
class Mallard(Duck):
    def walk(self) -> None:
        pass
    def quack(self) -> str:
        return "Quack"
 
# implicit implementation of the protocol
class Mandarin:
    def walk(self) -> None:
        pass
    def quack(self) -> str:
        return "Kwak"
    def fly(self) -> None:
        pass

As you can see from the last example, to be a subtype of the protocol, you need to implement at least the given methods, so any object implementing the proper methods with the proper types is automatically a subtype of the protocol.

This is a very powerful feature. Moreover, protocols can serve the same role as abstract base classes, so if you use abstract base classes in your codebase, you might want to consider replacing them with protocols to reap the benefits.
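To illustrate, a function typed against the Duck protocol accepts the Mandarin class even though Mandarin never mentions Duck. The greet function is my own addition, and Duck and Mandarin are repeated here to keep the snippet self-contained:

```python
from typing import Protocol


class Duck(Protocol):
    def quack(self) -> str:
        ...


# Mandarin never inherits from Duck, but matches its structure
class Mandarin:
    def quack(self) -> str:
        return "Kwak"


def greet(d: Duck) -> str:
    return d.quack()


# Accepted by the type checker: Mandarin is a structural subtype of Duck
print(greet(Mandarin()))
```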

My good practices

We have seen how subtyping can help the static type checker work efficiently for you and prove properties about your system. In conclusion, here is my summary of recommendations:

Be sure to understand your business domain well, as your types and their relationships should reflect it as well as possible, so that your type system will prove properties that matter.

Define new types as soon as they make sense. If your function works on usernames, create a username type instead of using str.

Think twice about subtyping relationships. Inheritance can still make sense sometimes, but you can also replace it with protocols and composition.

Read more about the typing possibilities that Python offers by going through the documentation and the PEP.

What is automated UI testing and why should you do it? https://qvik.com/news/what-is-automated-ui-testing-and-why-should-you-do-it/ Mon, 24 Oct 2022 06:24:50 +0000 https://qvik.com/?post_type=qvik_story&p=3506 Automating user interface testing can help you avoid bugs, make sure that existing features keep working, let the development team focus on what matters – and save money as a result.

The post What is automated UI testing and why should you do it? appeared first on Qvik.

Anyone who has tried cooking straight from a recipe knows how hard it is to write and read clear instructions for anything that includes dependencies. In software, there are few things that can be expressed as simply as in recipes.

There are many ways software developers manage complexity, but at the end of the day, there are always untraveled paths and unintended consequences in even medium-sized applications. While we as developers are experts in foreseeing the possible cases, we still can’t cover even a fraction of the possible routes in our heads. Instead, we rely on trying out the features and seeing if they work as expected.

Why can’t we just make sure that our app has no bugs?

Imagine sitting in a car at a street corner in Paris. Your task is to drive around and find out whether you can get back to that same street corner using any combination of the streets of Paris without a problem. Some streets are one-way, some intersections only let you turn right and so on. That’s all okay, as long as you don’t end up in an inescapable loop, or a one-way cul-de-sac.

It makes intuitive sense that traversing every combination of streets in Paris is impossible. There are an infinite number of ways to reach the starting point, including driving around a single block for a hundred years and then getting back.

Nonetheless, this task is a lot simpler than thoroughly testing software.

In software, we have control structures that effectively teleport you from one street to another, states of various shapes and sizes, network failures, and so on. Fortunately, there are several strategies to try and make sure that the system works, even if we cannot cover every conceivable case.

We can take a look at each intersection at a time to make sure that there is a way out. We can also pay special attention to crucial and/or complicated areas and choose specific routes through the city that are always guaranteed to work.

And since we work with software, instead of people having to do all this every time, we can write a small amount of code that checks the other code in one way or another. This is called automated testing.

What is automated testing in practice?

There are many kinds of testing that can be done for user-facing applications. Here are some of the most common measures for ensuring the quality of software, in no particular order:

  • Usability testing
  • Accessibility testing
  • Performance testing
  • Compilation – in statically typed languages, compilation is the first line of defence
  • Manual testing (during development, in code review or by QA people)
  • Static analysis and linting
  • Unit testing
  • Snapshot testing
  • Visual regression testing
  • Integration/Component testing
  • End-to-end (e2e) testing with mock and/or real backends

Some of these are automated tests by definition, others can be automated to some degree, and some cannot be automated at all. In this article, my arguments mostly relate to the three principal types of automated tests, namely unit testing, component testing and end-to-end testing.

I will talk about the different types of testing in more detail in a future article. But to make sure we are on the same page, let’s take a quick look at the three major types.

Three major types: Unit, component and end-to-end testing

Unit testing is possibly the most well known type of automated testing. It is about looking at the smallest meaningful entity and making sure that it works in isolation. This is the single intersection at a time strategy in the streets-of-Paris analogy.

Unit tests run very fast and are powerful for things that are generic in nature. They are a particularly good fit for business logic and data transformations.
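As a sketch, a unit test for a small piece of business logic could look like this; the cart_total helper is hypothetical:

```python
# A pure data transformation: ideal unit-testing territory.
def cart_total(prices: list[float], discount: float = 0.0) -> float:
    return round(sum(prices) * (1 - discount), 2)


def test_total_without_discount() -> None:
    assert cart_total([10.0, 5.5]) == 15.5


def test_total_with_discount() -> None:
    assert cart_total([100.0], discount=0.1) == 90.0
```

Because the function has no UI, network or state dependencies, hundreds of tests like these can run in well under a second.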

Component testing is what it sounds like. It is done at the component level, from a simple button to a date picker or an infinite list component. It can include a visual test runner that shows the UI component like it was an independent “application” of its own. This is similar to looking at a bigger area of Paris and focusing our attention on the peripheries to verify that the traffic flows as it should.

In these tests, we set the component to a desired starting state and work from there. The tests check things such as “are the correct checkboxes checked by default?” or “does the Delete button call the removal endpoint with the correct parameters?” Component tests take longer to run than unit tests, but are still quite fast.

End-to-end testing is closest to how the user actually uses the application: the website or mobile app is brought up just like a regular user would see it, and the tests work by clicking buttons, typing text in text boxes, and so on. In the Paris example, this would mean choosing the specific important routes that should be verified each time something changes.

End-to-end, or e2e testing is a bit hazy in terms of “how end-to-end” we are actually testing. In the shallowest sense, end-to-end can mean providing canned responses to particular requests (i.e. not requiring a network connection). E2e testing often includes at least a minimal version of the main backend service and can sometimes mean including everything from databases to authentication systems in the test run.

End-to-end testing is great for checking the important user flows but has its own drawbacks. Getting to a particular starting point can require a lot of work, and resetting the stage for the next test can be tricky. Furthermore, e2e tests are the most prone to “flakiness”, i.e. tests passing on occasion and sometimes not, and are by far the slowest, since the entire application (or even infrastructure) needs to be brought up before the test can run.

Why should you do automated testing for UI code?

While working on a feature, every front-end developer switches between looking at the code and checking the end result. Manual testing is effectively built into the natural workflow. 

However, relying on manual testing by developers is prone to regressions and is very rarely comprehensive. Developers try out the expected flows they just wrote out as code, meaning that uncommon user flows are almost never tested during development. It requires a sort of a context switch to go from “did I achieve what I was going for?” to “what could go wrong with this piece of code?”

I like to think about testing in terms of return on investment (ROI) and posit that many people find the return on investment in UI testing to be quite poor. And for sure, there are many ways to write less-than-useful tests but, in my experience, most of the time this happens because the tests were not really thought through.

Automated UI testing helps you save money 

There are definite trade-offs involved in writing tests, and I will dive deeper into these topics in a later post. But for now, let’s think about what we can gain by automating our tests.

Bugs are bad PR. Depending on the type of bug and its persistence, an issue in your application could cause anything from mild inconvenience and annoyance to loss of trust and missed sales. Making an upfront investment in proper testing and quality assurance can prevent the vast majority of bugs making it into releases.

Catching issues early saves tons of time and effort. According to an old adage, fixing an issue in a software project costs 1× in the concepting phase, 10× in the design phase, 100× in the development phase, 1,000× in the quality assurance phase, and 10,000× in the production phase.

The earlier an issue is detected, the easier it is to work around and avoid. When a bug is found in production, it interferes with regular development and the eagerness to resolve the issue quickly may cause a vicious cycle where the intended fix causes a new issue, and now there are two potential bugs users might face. Regression testing can help avoid such cycles, and writing tests for the intended fix may reveal a better way to fix the issue.

Fewer hours spent on manual testing. The main goal of automation is to have humans spend less time doing repetitive tasks. Machines are just better at that. Humans get bored, miss steps and forget things. Less time spent doing menial tasks, or especially forgetting to do them and then fixing bugs, amounts to more time spent on feature development.

Conclusion

In general, it’s not possible to be absolutely certain that software will work as intended. This is especially true for user-facing applications in which unexpected inputs, network issues, and other hard-to-predict events are common.

Automated testing is an important part of how we make sure that the features are doing what we want them to, and that continued development doesn’t inadvertently change the existing functionality.

In the next post, titled Getting objective results with automated UI testing, I will talk more about the benefits for the development team.

Illustration is made with Midjourney.

Want to hear more about UI testing? Join our Pizza & Beers event at Qvik office on Wednesday, November 2, 2022 at 5:00 PM, and you will.

Typing in the context of dynamic languages 2: Variance in Python https://qvik.com/news/typing-in-the-context-of-dynamic-languages-2-variance-in-python/ Thu, 13 Oct 2022 06:15:25 +0000 https://qvik.com/?post_type=qvik_story&p=3359 In my previous article, we saw why and how to express types in Python for static analysis. We also briefly illustrated the reasons for using the most precise types possible to help the type checker. In the following parts, we will focus on other types of subtyping that are not based on unions or class hierarchy.

The post Typing in the context of dynamic languages 2: Variance in Python appeared first on Qvik.

You can check out the previous article here, but now let’s get started with generic types.

Generic types are a form of subtyping with its own intricacies. A generic type can be thought of as a function on types: it takes a type as an argument to create a new type. Here is an example to clarify the concept:

l: list[int] = [1, 2]

The list type is a generic type: it takes an argument (int) to create a new type (“a list of integers”). We usually note type parameters with a T (for “type” of course), so we would talk about the generic type list[T].
There can of course be multiple type parameters, such as one type for keys and one for values in dictionaries: dict[K, V]:

d: dict[str, int] = {'foo': 42}

And we can also define custom generic types: 

from typing import Generic, TypeVar
 
T = TypeVar('T')
 
def generic_function(v: T) -> T:
    return v
 
class GenericClass(Generic[T]):
    # we can use the type T for both class attributes
    # and methods
 
    y: T
 
    def __init__(self, x: T):
        self.x = x

This example only shows the syntax without giving a real usable class, but we will address this in the next sections.

Invariance

Variance is an important property of generic types that a lot of developers may not be aware of. It describes how generic types behave with regard to subtyping. We will use a list as an example.

The question here is whether a list[int | str] is or is not a subtype of list[int], and vice versa. Let’s explore this with code.

l: list[int | str] = [1, 'foo']
 
# with a list[int] we could take the first element and
# add 1 to it safely
l[0] + 1
# does not type-check because we might get a string, so
# list[int | str] is not a subtype of list[int]
 
l2: list[int] = [1, 2]
# with a list[int | str] we could append a string
l2.append("foo")
# this does not type-check either, so list[int] is not
# a subtype of list[int | str]

So as we can see, even though int is a subtype of int | str, there is no subtyping relationship between list[int] and list[int | str]. In this case, we say that list is invariant in its type parameter.

This is important when it comes to using precise types: in good, precise typing, you cannot replace the type parameter of a list with a subtype or a supertype. It also means that if you want flexibility, you might want to consider a more “flexible” generic type, such as Sequence.

Covariance

Indeed, the Sequence type is what we call covariant in its type parameter, meaning that, for instance Sequence[int] is a subtype of Sequence[int | str]. If we can iterate on a list of integers or strings, we can safely iterate on just integers. It is very intuitive, but here is an example for good measure:

from collections.abc import Sequence
 
def f(s: Sequence[int | str]) -> None:
    for v in s:
        print(v*2)
 
f([1, 2, 3])  # all good !

We can define our own covariant types by specifying it in the TypeVar:

from collections.abc import Sequence
from typing import Generic, TypeVar
 
T = TypeVar('T', covariant=True)
 
class Grower(Generic[T]):
 
    def __init__(self, v: T):
        self.v = v
        self.t: list[T] = []
 
    def get(self) -> Sequence[T]:
        self.t = [self.v, *self.t]
        return self.t
 
def f(g: Grower[int | str]) -> None:
    for _ in range(5):
        for v in g.get():
            print(v*2, end=',')
        print()
 
f(Grower(1))  # all good !

As a rule of thumb, your generic will be covariant in a type parameter if that parameter appears only in covariant positions, except for __init__. One important covariant position is the return type of functions and methods. Therefore:

def f() -> int: ...
# has a more precise type than
def f() -> Number: ...

Contravariance

When the type parameter appears, for instance, in the parameters of a function, the opposite happens. This is called contravariance. Contravariant types are less common than covariant types. Roughly, all contravariant types derive from having a type parameter in a function argument position, or more generally from the type parameter being consumed rather than produced. So let's take a deeper look at this function example.

Let’s go back to our previous example of the next function:

basenumber = int | float
i: int = 1
 
def next(n: int) -> basenumber:
    return n + 1
 
next(i)

The int type is a subtype of basenumber. Contravariance means that, unlike in covariance, using basenumber instead of int in the type parameter will result in a type Callable[[basenumber], basenumber] that is a subtype of Callable[[int], basenumber].

basenumber = int | float
i: int = 1
 
def next(n: basenumber) -> basenumber:
    return n + 1
 
# this is still valid of course, meaning that we can use
# Callable[[basenumber], basenumber] wherever we can have
# a Callable[[int], basenumber], meaning it is a subtype
next(i)

So to summarise this example, we had a type Callable[[int], basenumber]. Thanks to covariance, we have a first subtype Callable[[int], int], and thanks to contravariance we have a second subtype Callable[[basenumber], basenumber].

Those two subtypes cannot be compared to one another. They are both just as precise, and the choice of which one to use depends on your intention, but both are better options than the original.

In my next article, we will focus on defining custom types, so stay tuned!

Typing in the context of dynamic languages 1: Types and subtypes in Python https://qvik.com/news/typing-in-the-context-of-dynamic-languages-1-types-and-subtypes-in-python/ Mon, 26 Sep 2022 11:09:06 +0000 https://qvik.com/?post_type=qvik_story&p=3349 In this article, we will discuss adding static typing on top of dynamically typed languages by looking at the case of Python. Of course, most of the ideas proposed here apply equally to Typescript, PHP, you name it. I only chose Python because that is what I’m familiar with.

The post Typing in the context of dynamic languages 1: Types and subtypes in Python appeared first on Qvik.

In the second article of this series we will talk about variance in Python, and in the third article we will go deeper into defining custom types. But first, let’s focus on types and subtypes in Python.

Typing is a topic on which there are a lot of contradictory definitions and information out there, so for the sake of clarity, I will start with a couple of definitions that we will use in this article. If your definitions are a little different, these concepts will still probably be valid with a little adaptation.

The two concepts usually open to discussion are strong vs weak typing and dynamic vs static typing. In this article we talk about static typing when the type-checking happens at compile time, and dynamic typing when it happens at execution time. On the other hand, weak typing means that a type error can lead to a cast, whereas in strong typing, a type error is just an error and will stop compilation or execution.

It is important to understand that dynamically typed languages can be strongly typed (like Python) or weakly typed (like Javascript). In Python, a line such as 1 + "1" will result in an error, whereas Javascript will use casts to evaluate it as the string "11". In that context, adding static typing on top of the language will prevent different kinds of bugs.

In Python, it will mostly prevent exceptions from being raised at execution time; in JavaScript, it will mostly prevent wrong values from being computed. But you can reap both benefits.
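The Python half of that claim is easy to verify; here is a minimal sketch of the strong-typing behaviour described above:

```python
# Python is strongly typed: mixing int and str raises a TypeError at
# runtime instead of silently casting one operand to the other's type.
try:
    result = 1 + "1"
except TypeError as e:
    result = f"TypeError: {e}"

print(result)
# In JavaScript, by contrast, the same expression coerces to the string "11".
```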

One last concept to keep in the back of our minds is that type systems have different properties, one of which is completeness. When a type system lacks completeness, some expressions have no type that can be inferred for them.

Unfortunately, when a static type system is added on top of a dynamic language, such expressions occur quite commonly, meaning that you can never get the full safety of some languages that are built with static typing from the ground up. You will need to help the type checker by telling it some of the types, but as a human, you can make wrong assumptions.

But still, although it is not perfect and you will still need to rely on dynamic type checking as well, adding static typing on top will be the icing on the cake, and may be a gateway drug to more complete type systems, like in Rust, C# or Haskell.

How to use types in Python

With this short introduction to typing out of the way, let’s focus on typing in Python.

Essentially, in Python, everything is an object or a type. Indeed, types are first-class objects and often also act as functions. Let’s look at an example:

l = [1, 2, 3]
print(type(l))
print(type(l)())

<class 'list'>
[]

As you can see, we can get the type of a list, and use it as a constructor to create a new list, so a type is indeed a first-class object that could be stored in a variable and manipulated at runtime. What about its type?

print(type(type(l)))
print(type(type(type(l))))

<class 'type'>
<class 'type'>

So the type of list is type, whose type is itself type. Python has this concept of metaclasses that we will not go any deeper into in this article. Let’s just remember that all types have a type themselves, which derives from type.
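As a small sanity check of that claim (the class name here is ours, purely for illustration), any class we define has type as its metatype:

```python
class Foo:
    pass

# The type of a user-defined class is type itself...
print(type(Foo))                    # <class 'type'>
# ...so every class is an instance of type, and its metatype derives from type.
print(isinstance(Foo, type))        # True
print(issubclass(type(Foo), type))  # True
```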

The dynamic typing part of Python, which is the only one really built into the language, uses this mechanism to tell the type of objects at runtime and check the validity of some operations. However, this information is not available for static typing outside of execution.

Python has also introduced type annotations that do nothing in themselves but can be used by external tools both statically and dynamically. We will assume a recent version of Python (3.10+). For older versions, you will need to import things from the typing module (List instead of list, Union instead of |). If you use an older version than that, you should probably consider updating anyway!

# We specify the types of the arguments and the return value of the function.
# For the *keys and **extra arguments, we only annotate the element types;
# the tuple and dict wrappers are implied.
def create_dict(value: int, *keys: str, **extra: int) -> dict[str, int]:
    """Create a dictionary of string keys and integer values"""
    # we can specify the types of variables too
    d: dict[str, int] = {key: value for key in keys} | extra
    return d

Again, adding these annotations doesn’t do much in itself, and you will need to get your code through third-party tools like Mypy or Pyright to perform static type analysis.
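To make that division of labour concrete, here is a hypothetical misuse of create_dict: the second call violates the annotations, yet Python runs it without complaint, and only a static checker would flag it (Mypy's exact wording may differ).

```python
def create_dict(value: int, *keys: str, **extra: int) -> dict[str, int]:
    """Create a dictionary of string keys and integer values"""
    d: dict[str, int] = {key: value for key in keys} | extra
    return d

ok = create_dict(1, "a", "b", c=3)  # respects the annotations
print(ok)  # {'a': 1, 'b': 1, 'c': 3}

# A static checker would reject this call ("str" is not an "int"),
# but at runtime the annotations do nothing and the call succeeds.
bad = create_dict("x", "a")
print(bad)  # {'a': 'x'}
```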

Subtyping in Python

One important aspect of code analysis is subtyping, meaning which types are included in another type. To illustrate this, we will use the numbers module from the standard library.

import numbers
 
isinstance("1", numbers.Number)  # False
isinstance(1, numbers.Number)  # True
isinstance(1.5, numbers.Number)  # True
isinstance(1.5 + 5j, numbers.Number)  # True
# (how awesome is it that Python has built-in complex numbers?)

As we can see, int, float and complex are all subtypes of numbers.Number: every integer, float or complex number is also a Number.

This can happen in several situations:

  • We have a class hierarchy, in which case the derived classes are subtypes of the base classes.
  • We have defined a new type that is a union of several types, such as basenumber = int | float. In this case, basenumber is the supertype, and int and float are the subtypes.
  • We are using generic types, which we will discuss in more detail in the next article, on variance.
  • Structural subtyping, aka “duck typing”, which we will discuss in the part about protocols in an upcoming article.
  • We have defined new types and declared them as subtypes of another type, which we will also cover in that article.

Let’s remember that, although all of the above can be checked statically, no static checker can verify behavioural subtyping: any property of a supertype must also hold for all of its subtypes, which is better known as Barbara Liskov’s “substitution principle”. That’s why we developers need to be careful.
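Here is a hypothetical example of such a violation (the names are ours): the subclass keeps the exact same signature, so every static checker is happy, yet it breaks a property that callers of the supertype rely on.

```python
class Counter:
    def __init__(self) -> None:
        self.value = 0

    def increment(self) -> None:
        # Property of the supertype: increment() always grows value by 1
        self.value += 1

class CappedCounter(Counter):
    def increment(self) -> None:
        # Same signature, so this type-checks, but the property above no
        # longer holds once the cap is reached: behavioural subtyping broken.
        if self.value < 3:
            self.value += 1

def bump_twice(c: Counter) -> int:
    c.increment()
    c.increment()
    return c.value

print(bump_twice(Counter()))  # 2, as the supertype promises

capped = CappedCounter()
capped.value = 3
print(bump_twice(capped))  # 3: the substitution principle is violated
```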

What I call precise typing

Writing good types demands some experience and discipline. It is very easy, when faced with a difficulty, to just use typing.Any without giving it a second thought, but then we won’t gain much from type checking. It is especially difficult to use types on values from libraries over which we have no control.

The type checker does some inference (guessing types), but to gain the most benefits, you need to give it precise instructions about what you expect. If you do not specify the type that you expect a function to return, the type checker can guess what will be returned, but not whether or not it was expected.

The more precise the information, the better the type checking will be. Let’s look at the following example:

basenumber = int | float
i: basenumber = 1
 
def next(n: int) -> basenumber:
    return n + 1
 
next(i)


When we run Mypy on this, we get the following:

foo.py:7: error: Argument 1 to "next" has incompatible type "Union[int, float]"; expected "int"
Found 1 error in 1 file (checked 1 source file)

Of course, we have not given the most precise type possible to i, meaning the “deeper” subtype. This would be a much better example, although still not perfect:

basenumber = int | float
i: int = 1
 
def next(n: int) -> basenumber:
    return n + 1
 
next(i)

Now Mypy will not complain anymore: by declaring that i is an integer, a subtype of basenumber, we have given it a more precise type. We can already see that our next function could work with floats, so we have not yet given it its most precise type.
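As a teaser for the generics discussion to come, a constrained TypeVar (a feature the later articles cover, previewed here under our own names) gives next its most precise type: the return type follows the argument type.

```python
from typing import TypeVar

# N can only be int or float, and the return type matches the argument type,
# so next(1) is known to be an int and next(1.5) a float.
N = TypeVar("N", int, float)

def next(n: N) -> N:
    return n + 1

print(next(1))    # 2
print(next(1.5))  # 2.5
```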

To do so, in next week’s article we will talk about variance, a property of generic types that expresses how the subtyping of type parameters carries over to the generic types themselves. Stay tuned!
