Finally, Apple Music cadence playlists for running

I took about a month off between my last job and my current one, ostensibly to do some work around the house but also to wrap up my running music app: Runtracks.

The inspiration was simple: when a track comes on while you’re running and it perfectly matches your step cadence, it feels great! Why can’t we do this for every track in your running playlist?

Runtracks is a set of curated tracks from Apple Music that are perfect for running, combined with software that adjusts the beat of the music to match your run cadence.

And honestly, it’s great to use. I have been running mostly on a treadmill since covid began and being able to dial in a speed and cadence and have music to go along with it just feels great. More recently I have started running outside again and the experience is just OK - hills will make you speed up or slow down just enough to get out of sync - but that’s nothing a few more features can’t fix.

Right now it’s 100% free to use, other than having to be an Apple Music subscriber already. Like any good app, the software is only half the story; regular content updates are really what make it continually valuable. As a few features solidify and I feel more comfortable focusing more on content and less on core functionality, I’ll probably add a very cheap subscription option to keep the content flowing.

If you’re a runner (or if you’re not and want to be!), download Runtracks on the App Store and give it a shot. As always, shoot me some feedback if you have ideas for features or if you had a good run.

Home Temperature Logging with homebridge, Influxdb and Grafana

We recently were able to buy our first house (!!! it does still seem a bit surreal) and a flood of projects that I’ve never quite been able to commit to in a rental has been added to my todo list.

One of those was setting up historical temperature charts for indoor spaces, and in general just building out some fun homekit integrations without shelling out lots of $$$ for expensive sensors. You can definitely achieve this without homekit and homebridge in the middle if you don’t care about that part, but the homekit plugins do provide some plumbing to connect bluetooth to mqtt to influxdb with only light configuration.

This is the culmination of a twitter thread I started a while back.

sensor choice

I started with a few homekit-native temperature sensors, the cleargrass CGG1 model, which were expensive but very easy to connect directly to homekit. Unfortunately there’s no way to get data out of homekit, so to plot the values over time you need an intermediary that fetches the sensor data over bluetooth and then fakes a new accessory that homekit can display - hence the connection through homebridge.

All of the common sensor models I looked at have some sort of encryption around the data they transmit, so you have to extract the “bindkey” through various semi-hacky methods passed around in gists. It seemed like other folks were able to get the CGG1 bindkey using fake android apps or by syncing their hardware with some cloud service and then fetching the key via an API, but none of those methods ended up working for me and the CGG1.

That rabbit hole led me to another sensor that was significantly cheaper because it had no native homekit integration (which I didn’t want anymore anyway) and a slightly smaller screen: the Xiaomi Mijia LYWSD03MMC. Rather than $30 per sensor, these could be purchased for as little as $5 each in packs of four!

Even better, the LYWSD03MMC seemed like it had some of the best tooling for installing custom firmware which removed the data encryption and added some extra features. I purchased two to get started.

bluetooth hardware

Before I get into how everything connects together, a short interlude on bluetooth on Ubuntu. It’s awful, and I spent too much time fighting it instead of just doing the thing I wanted to accomplish.

Or at least the native chipset in the little M1T hardware I’m using sucks. Lots of people report success with bluetooth on Raspberry Pi models, which are a common platform for homebridge installations. You can see my whole journey in the twitter thread, but the short version is that the bluetooth device would disappear entirely after 6 to 12 hours and no amount of sudo hciconfig hci0 reset would fix it. Nor any other bluetooth incantation short of a system restart, for that matter.

I ended up getting a tiny bluetooth dongle from TP-Link, their UB400 model, which a) was plug-and-play on linux, if you can believe it, b) had significantly better range than the internal bluetooth, and c) didn’t constantly disappear from the machine.
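
If you go the dongle route, a quick sanity check that linux actually picked it up (assuming the standard BlueZ tools are installed) looks something like:

# confirm the dongle shows up on the USB bus
lsusb | grep -i bluetooth

# confirm an hciX adapter exists for it and reports UP RUNNING
hciconfig -a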

Don’t fight flaky bluetooth chipsets on linux. Just get a cheap dongle that is well supported.

reflashing the sensors

Not nearly as scary as reflashing devices used to be. Here is a web reflasher (yes, really!) for these devices. You have to enable the #enable-experimental-web-platform-features flag in Chrome; instructions for that are on the page.

The UI is not great here but it’s a fairly simple process and you can do it on any machine with bluetooth, not just the homebridge server.

  • Download the firmware from the “Custom firmware repo” link on that page; it’s an ATC_Thermometer.bin file
  • Enter the BLE device name prefix so you don’t see every bluetooth device nearby. On stock firmware, use LYWSD03; after you flash it, the device will appear as ATC (the name of the firmware) instead
  • Click Connect
  • After a few seconds you should see the device pop up in the bluetooth pairing window of Chrome. Select it and Pair
  • The log at the bottom of the window will tell you when it’s connected
  • Click Do Activation when it’s connected
    • You can ignore the token and bindkey; we’ll be disabling encryption with the new firmware
    • If a MAC address like A4:C1:38:B7:CB:10 shows up in the Device known id field, note it somewhere, but this was hit or miss for me and we can get the MAC later as well
  • Select the previously downloaded firmware file at the top of the page under Select Firmware and click Start Flashing; it’ll take 20 seconds or so to finish up
  • Once it restarts, customize with the controls in the middle section of the page to your liking. I selected:
    • Smiley: Off
    • Advertising: Mi-like
    • Sensor Display: In F˚
    • Show Battery: Enabled
    • Advertising Interval: 1 min
  • After selecting each of these, the sensor will update with the new setting immediately. You MUST click Save current settings to flash to persist your settings between restarts
  • If you didn’t get the MAC from the earlier step, simply remove the battery and pop it back in to restart the sensor. The new firmware ensures that while booting, the humidity digits will read out the last three bytes of the MAC address; the first three are always A4:C1:38

connecting the dots

Now it’s “just” a matter of stringing all the components together. Here is a list of the bits and pieces that are connected:

Locally:

  • homebridge (plus the mi temperature sensor plugin)
  • mosquitto (the mqtt broker)
  • telegraf

Somewhere (local or remote):

  • InfluxDB
  • Grafana

I’m opting not to run Influx and Grafana myself because the free cloud offerings are a good start. Grafana is really just a frontend to the influx data and thus doesn’t even need to be running most of the time, so Heroku is a good option if you want to run it yourself (tailscale even offers a nice way to spin one up that lives on your tailscale network). The limitation on the free offering is 30 days of data retention on influx, and the next tier is usage-based, which I imagine would be reasonable for the amount of data we’re throwing at it.

Once you have influx set up, you can configure the next step: sending data to influx with telegraf. Install telegraf using the standard instructions.

[[inputs.mqtt_consumer]]
  # subscribe to everything published under sensors/
  servers = ["tcp://localhost:1883"]
  topics = [
    "sensors/#",
  ]
  # each message payload is a bare value like "21.5", so parse it as a
  # single-column csv and type it as a float
  data_format = "csv"
  csv_header_row_count = 0
  csv_skip_columns = 0
  csv_column_names = ["value"]
  csv_column_types = ["float"]
  csv_delimiter = " "

[[outputs.influxdb_v2]]
  urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
  # an auth token from influxdb cloud, Load Data -> API Tokens -> Generate API Token
  token = "XXXXX"
  # your email or org id from your influxdb account
  organization = "email@example.com"
  # your bucket name, create this on influx first, Load Data -> Buckets -> Create Bucket
  bucket = "homebucket"

Create a config file like this in /etc/telegraf/telegraf.d/, naming it something like mqtt-to-influxv2.conf. You can throw it all in a single top-level .conf file too, but it’s nice to be organized. Restart telegraf and it will now forward from mqtt to your influxdb instance.
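
On a systemd-based install (an assumption; adjust for your init system), that restart plus a quick log check looks like:

sudo systemctl restart telegraf

# watch for connection errors to mqtt or influx
journalctl -u telegraf -f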

Note the topics section. We’ll be organizing our mqtt topics to look like sensors/garage/temperature so this tells telegraf to forward everything that starts with sensors/.

Next step: forwarding messages via mqtt. Install mosquitto (the mqtt broker) and a client in case you need to test the output. Generally you can do sudo apt-get install mosquitto mosquitto-clients, or see the mosquitto github readme for other platform instructions.

If you need to test that mqtt messages are being sent, you can use mosquitto_sub -h 127.0.0.1 -t sensors/# -v and it will display messages as they arrive. No other mqtt config is required.
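
You can also publish a fake reading yourself to prove the whole mqtt -> telegraf -> influx path, not just the broker (the topic here is just an example):

# publish a fake temperature reading that telegraf will pick up
mosquitto_pub -h 127.0.0.1 -t sensors/test/temperature -m "21.5"

A flush interval or two later, the value should be queryable in the influx Data Explorer under the mqtt_consumer measurement.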

Next step: sending messages from your bluetooth devices to mqtt.

I won’t get into installing homebridge, but I highly suggest you add homebridge-ui to manage your instance. That way you can pop this url for the mi temperature sensor into the plugin search and install it easily.

Once it’s loaded into homebridge, use homebridge-ui to configure the plugin by using the Settings link on the plugin page. It should have your first accessory already created and you should fill in these values:

  • Device MAC address: the MAC address we got while flashing the device
  • Expand the MQTT section
    • Broker URL: mqtt://localhost:1883
    • Topics: use a format like sensors/garage/temperature as we discussed above, and give your temperature, humidity and battery topics distinct names
  • Save and restart homebridge

Click Add Accessory to duplicate these fields for another sensor, so you can add as many as you like, but be sure to change the MAC and mqtt topics at a minimum.

If you’re not down with homebridge-ui and are writing your homebridge config by hand, use the plugin docs to figure out which json config keys to use for the same items above.
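
For reference, the hand-written shape is roughly this - key names approximated from memory, so treat the plugin README as authoritative:

"accessories": [
  {
    "accessory": "Hygrotermograph",
    "name": "Garage Sensor",
    "address": "a4:c1:38:b7:cb:10",
    "mqtt": {
      "url": "mqtt://localhost:1883",
      "temperatureTopic": "sensors/garage/temperature",
      "humidityTopic": "sensors/garage/humidity",
      "batteryTopic": "sensors/garage/battery"
    }
  }
]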

I cribbed a lot of the setup from homekit to mqtt to telegraf from this reddit post on building a homebridge to grafana pipeline, updating it for the influxdb_v2 output. I think the order of operations is weird in that post but the config steps do work out in the end.

That’s it! Your sensors should be publishing data to mqtt, which is passing it to telegraf, which is adding it to your influxdb instance.

graphing your data

The last step is exploring your data on influx and grafana to configure a dashboard. This is subjective depending on what you want to see, so you can play around with it as you see fit. Some guidance to get started though:

Queries are arguably better to design in influx since you can more easily browse what data is available using the Data Explorer tool. You can click through your bucket’s data to roughly select the data you’re looking for, then click Script Editor to get a query for that data which can be pasted into a grafana panel.

For example, here’s a query from one of my temperature panels:

import "math"

convertCtoF = (tables=<-) => 
  tables
    |> map(fn: (r) => ({
        r with
        _value: (r._value * 1.8) + 32.0
      })
    )

from(bucket: "homebucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "mqtt_consumer")
  |> filter(fn: (r) => r["topic"] == "sensors/garage/temperature")
  |> convertCtoF()
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
  |> yield(name: "mean")

The bottom part, from(bucket: "homebucket")..., is based on a query I created in the influx data explorer, and then I added a quick conversion step from C˚ to F˚. This is influx’s flux query language, which is not always easy to understand, but between the reference docs and asking questions on their support forum you can probably come up with what you want to do.

grafana graphs for two temperature sensors

Once you have the query set, you can continue to customize by adding a title, soft min and max values and, most importantly, your units.

additional sensors?

What else? Got other sensors that can be read over bluetooth and forwarded to influx? Random other things that can publish mqtt topics that are not homekit related? Now that you have the basics for an influx pipeline on your homebridge server, the possibilities are endless.

Questions? Did you extend this in fun ways? Let me know on Twitter.

Next time: let’s use our existing infrastructure to do home network monitoring!

Developer Documentation Is Awful

I’m working through a rewrite of the 5 Calls site to static publishing via hugo (same as here) but with the addition of dynamic content via small React components. I don’t see this approach in a lot of places but so far I think it’s very effective; I’ll talk about the specifics at some point in the future.

Because I’m not just building features into an existing architecture, I’m doing a lot of experimenting with how to accomplish what I want in a minimal, clean way. Some specifics about why that process is difficult for programmers follow…

Although I have a pretty good sense of React after running the regular 5 Calls site as a SPA for a few years and dealing with React Native both at my regular job and for 5 Calls’ related app, Voter Network, I still run into situations where I want to accomplish a specific thing in React and just have no way to figure out the right way to do it.

This process isn’t about writing the code, it’s about understanding the purposes and limitations of the framework, which is… 90% of programming. So we turn to Google to try to figure this stuff out, and that doesn’t always work well.

It struck me today how incredibly low-tech this is. One side of me appreciates the community aspect of it; most searches bring you to a stack overflow question (low quality questions excepted) or a blog post on some engineer’s website, which can be incredibly informative. But the other side of me wonders why official documentation leaves so much room for third-party instructions on how to accomplish things.

I’ve returned to this a few times recently as I’ve been evaluating projects that I’m working on. We get hung up on “I’m an engineer that knows Swift” (or whatever language) when that’s barely 10% of the actual work for most engineering jobs. I am relieved when I hit a part of a project that only requires writing logic in language x because it’s so straightforward, even for languages I’m not super familiar with.

Unless you’re at unspecified fruit company writing actual subsystems, you’re mostly working within frameworks and libraries that you’ve decided to use, and architecting your code around how those parts work is vastly more difficult than structuring the code itself.

With that in mind, why is documentation so insufficient? Even for a large, well maintained project like React it’s difficult to find out what the right pattern is for what your code is trying to do. High-level examples are exceedingly hard to find, particularly when they do something outside of the norm.

No grand resolutions on how to deal with this. Just thoughts on what’s broken for now.

Custom Homebridge Plugin for Garage Homekit

Funny story, a few weeks ago I locked myself out because technology. I left the house via the garage to see some neighborhood commotion and realized when I came back that I had been hoodwinked by my own code.

You see, I typically let myself in via a custom developer-signed app that travels out over the internet, back into the house via a reverse proxy, and then triggers an Arduino+relay connected to the door opener. It’s got… a few single points of failure. But it had been quite reliable until that week, when I left the house without checking the app first. Developer certificates for apps only last until your current membership expires (at most a year, if you installed an app on the day you renewed your membership) and my membership had renewed since the last time I used the app - one of the secret perils of extended work-from-home, I guess.

Everything worked out and I was able to get back in relatively quickly (quoth @bradfitz: “luckily you have a friend with backdoor access to your home network”), but it prompted me to tackle a project I had been putting off for a while: migrating from a custom app to a custom homebridge plugin.

HomeKit is a far better fit for this use case: I can ask Siri to trigger it without writing my own Siri intents (which I did for the original app - except HomeKit has a monopoly on asking Siri to open the garage, so I had to configure it for “hey Siri, open the thing”), the user interface is built into the Home app and won’t expire periodically, and I can rely on an Apple TV acting as a HomeKit home hub rather than a reverse proxy. Less stuff I have to maintain or debug, and the only way I can be truly locked out is if the power is shut off.

getting started

As is customary, the actual code to wire all this stuff up is trivial but understanding the concepts behind the homebridge API is not.

I already had homebridge set up and configured for another project so I focused on how I could create a custom plugin for homebridge and connect it to my existing installation. I started by forking this example plugin project for homebridge: https://github.com/homebridge/homebridge-plugin-template

The installation instructions were great and I had the plugin showing up in homebridge-ui immediately.

Here’s where things start to get tricky: HomeKit garage door support is built with the idea that there’s a sensor that can detect whether the garage door is open or closed. This isn’t typically something a non-smart garage door can tell you. It’s got a toggle that opens, closes or stops movement of the garage door, and your eyes and brain are the indicators that the door has completed opening or closing.

If you look at the Homebridge Garage Door service API docs, you’ll note that it handles a few different states. There is no “toggle garage door” command, but there are triggers for setting the CurrentDoorState and TargetDoorState. In an ideal world we’d trigger the garage door toggle, set TargetDoorState to open, wait for the garage to open and then set CurrentDoorState to open.
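
As a sketch of how that wiring looks in plugin code - this uses the homebridge characteristic API, but toggleRelay() and the fixed 15 second timer are hypothetical stand-ins for my Arduino setup:

// inside the accessory class; Service and Characteristic come from
// the homebridge API object handed to the plugin
const { Service, Characteristic } = this.api.hap;
this.service = new Service.GarageDoorOpener('Garage Door');

this.service
  .getCharacteristic(Characteristic.TargetDoorState)
  .onSet(async (value) => {
    // a dumb opener only has "toggle", so fire the relay either way
    await this.toggleRelay(); // hypothetical call out to the Arduino

    const opening = value === Characteristic.TargetDoorState.OPEN;
    this.service.updateCharacteristic(
      Characteristic.CurrentDoorState,
      opening
        ? Characteristic.CurrentDoorState.OPENING
        : Characteristic.CurrentDoorState.CLOSING,
    );

    // with no sensor, assume the door finished moving after ~15 seconds
    setTimeout(() => {
      this.service.updateCharacteristic(
        Characteristic.CurrentDoorState,
        opening
          ? Characteristic.CurrentDoorState.OPEN
          : Characteristic.CurrentDoorState.CLOSED,
      );
    }, 15000);
  });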

Next time:

How to structure your homebridge plugin, and trying things the hard way…

New Zealand Flax Pods

Earlier this year I noticed one of the bushes in the backyard was sending up a bunch of flowers, more than I’ve ever seen on this one bush for sure, and now they’ve fully developed into seed pods. These were impressive even when they were pre-bloom; they’re probably 8 feet tall and there are something like 10 flowers per stalk over the seven stalks that the plant produced this year.

I thought these were super fascinating so I grabbed a few pictures. Turns out these are a variety of Phormium, or New Zealand flax, with bright pink stripes along the side of the broad leaves.

Seeds from New Zealand flax bush

Putting on my very unofficial botanist hat, the pods most likely open up and let their seeds out when they’re still quite high above the ground. The seeds, inside their disk-shaped hulls, then catch the wind, spreading farther than they would if they just dropped directly down.

openjdk cannot be opened because the developer cannot be verified when installing adb via brew


If you’re like me and enjoy the simplicity of installing command line tools using the brew command on macOS, you’ve likely run into one or two cases where Catalina prevents you from running a tool that’s been installed because it hasn’t been verified.

In this case, I was installing the android developer tools for React Native development and needed both adb and openjdk. I’ve used both of these commands to install them:

  • brew cask install android-platform-tools
  • brew cask install java

This situation is similar to downloading a new Mac app from any developer online. Some developers want to distribute apps without going through Apple’s signing process, and macOS can run that unsigned code - with some restrictions.

The Solution

The issue is that macOS labels all downloaded binaries with a “quarantine” attribute, which tells the system that they should not be run until explicitly approved by the user.

If you’re installing an app, the sneaky way to allow opening unsigned code is to use Right Click -> Open rather than double clicking on the app icon itself. That’ll allow you to approve removing the quarantine and you can open with a double click next time.

This even works in some cases with command line tools: you can run open some/path/to/a/folder from Terminal to open a folder in the Finder that contains adb, then right click the binary to get the standard bypass quarantine prompt.

The JDK is more tricky since it’s a folder and not an application. You can’t just right click to launch it; instead you have to manually remove the quarantine attribute from the folder where it’s been downloaded. You can do this easily in the terminal with this command:

xattr -d com.apple.quarantine /Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk

The command line tool xattr is used for modifying or inspecting file attributes on macOS. The -d flag removes an attribute, com.apple.quarantine is the quarantine attribute for unsigned code we discussed earlier and the final argument is the path to the file. Your jdk might have a different version or a different tool might be in an entirely different location.
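
If the folder still trips Gatekeeper after that, files nested inside the .jdk may be quarantined too. You can inspect and strip the attribute recursively - the path here matches the example above; yours may differ:

# list extended attributes to see if the quarantine flag remains
xattr -l /Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk

# remove the attribute from the folder and everything inside it
sudo xattr -r -d com.apple.quarantine /Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk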


As usual, quarantine is there to protect your computer from unsigned software. Please ensure you trust the developer you’re running unsigned code from before opening it on your machine.

React Native, Typescript and VS Code: Unable to resolve module

I’ve run into this problem largely when setting up new projects, as I start to break out internal files into their own folders and the project has to start finding dependencies in new locations.

In my case, it was complaining about imports from internal paths like import ContactPermissions from 'app/components/screens/contactPermissions';.

The error message tries to help by giving you four methods for resolving the issue, which seem to work only in the most naive cases:

Reset the tool that watches files for changes on disk:

watchman watch-del-all

Rebuild the node_modules folder to make sure something wasn’t accidentally deleted

rm -rf node_modules && yarn install

Reset the yarn cache when starting the bundler

yarn start --reset-cache

Remove any temporary items from the metro bundler’s cache

rm -rf /tmp/metro-*

These steps might work for you if your problem is related to external dependencies that have changed (maybe you changed node_modules without re-running yarn, or installed new packages without restarting the packager).

In my case with VS Code, none of these resolved the problem; modules still could not be found.

The Solution

The problem here turned out to be related to VS Code’s typescript project helper. When I referenced existing types in my files, VS Code was automatically importing the file for me - this is usually very helpful!

But for whatever reason, the way my app is set up means that even though VS Code could tell where app/components/screens/* was located (an incorrect import path usually causes VS Code to report an error on that line), typescript had trouble determining where this file lived from this path. Even being more specific about the start of the path with ./app/components/... was not working for the typescript plugin.

What did work was using relative paths in my typescript files. So instead of referencing files from app/components/screens/contactPermissions, I would use ../components/screens/contactPermissions for a file that was located in a different subdirectory of app.
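
Concretely, the change looks like this (paths from my project above):

// before: VS Code resolves this, but typescript can't at build time
import ContactPermissions from 'app/components/screens/contactPermissions';

// after: relative to the importing file, resolves everywhere
import ContactPermissions from '../components/screens/contactPermissions';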

This can be difficult to do manually (remembering what path you’re in and how many directories to go back up, etc), but VS Code can also generate and change these imports for you if it’s configured to do so.

Navigate to your workspace settings, search for typescript import and change the Typescript Import Module Specifier to relative from auto.

Or, do this in your preference json:

"typescript.preferences.importModuleSpecifier": "relative"

FFmpeg exited with code 1, Homebridge and Homekit configuration with Axis camera

If you’re trying to use the homebridge-camera-ffmpeg plugin for homebridge to connect your IP camera to Homekit, you may have run into issues with ffmpeg exiting with code 1 when trying to stream. This usually means ffmpeg can’t launch with the options provided in your camera configuration, but many different things can be going wrong and it’s hard to debug.

[1/18/2020, 8:27:54 PM] [Camera-ffmpeg] Snapshot from Front door at 480x270
[1/18/2020, 8:27:56 PM] [Camera-ffmpeg] Start streaming video from Front door with 1280x720@299kBit
[1/18/2020, 8:27:56 PM] [Camera-ffmpeg] ERROR: FFmpeg exited with code 1

There are lots of ways this can go wrong, so here are some steps to figure out where you might be having issues.

The Solution

First, confirm ffmpeg is installed and runs on your homebridge server. Just run ffmpeg at the command line and confirm it runs. Here’s what running it successfully looks like:

ffmpeg version 2.8.15-0ubuntu0.16.04.1 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) 20160609
  configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu... etc

You may want to note the codecs that ffmpeg has been installed with. For my particular Axis camera it was important to have h264 support, so look for --enable-libx264 in the configuration line.
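
A quick way to check without eyeballing the whole configuration line (assuming your build prints its configuration like the output above):

# prints the flag only if your ffmpeg build includes libx264
ffmpeg -version 2>&1 | grep -o -- "--enable-libx264"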

Next, you need to make sure you have the right video and image source URLs for your Axis camera. There are quite a few variations. Here is how the full configuration looks:

{
  "platform": "Camera-ffmpeg",
  "cameras": [
    {
      "name": "Front door",
      "videoConfig": {
        "source": "-rtsp_transport tcp -i rtsp://user:pass@1.2.3.4/axis-media/media.amp",
        "stillImageSource": "-i http://1.2.3.4/jpg/image.jpg?size=3",
        "maxStreams": 2,
        "maxWidth": 1280,
        "maxHeight": 960,
        "maxFPS": 30,
        "vcodec": "h264"
      }
    }
  ]
}

Both the source and stillImageSource urls can be looked up on this axis endpoint chart. Note that you need to add a username and password to the URL if configured, and of course substitute your own camera IP for 1.2.3.4.
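
It’s also worth testing the stream with ffmpeg directly, outside of homebridge, to separate camera problems from plugin problems (the URL and credentials are the placeholders from above):

# pull 5 seconds of video and discard it; errors here are camera or
# ffmpeg problems, not homebridge problems
ffmpeg -rtsp_transport tcp -i "rtsp://user:pass@1.2.3.4/axis-media/media.amp" -t 5 -f null -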

Lastly, if you still can’t figure out what’s going wrong, enable debug mode in your homebridge-camera-ffmpeg config to get more information:

...
  "maxFPS": 30,
  "vcodec": "h264",
  "debug": true
...

This will give you more info about what the plugin sees from your camera and what the result of the ffmpeg call is when trying to fetch the stream. You should attempt to look at the video stream in Homekit to kick off the ffmpeg process.

xcodebuild: error: Could not resolve package dependencies with Fastlane and Swift Package Manager on CircleCI / Bitrise

The Problem

If you’re running tests on your iOS build CI pipeline with fastlane, you might run into an issue when running scan using Xcode 11+ if you’ve got some Swift package manager dependencies. The full error might look like this:

[18:44:50]: ------------------
[18:44:50]: --- Step: scan ---
[18:44:50]: ------------------
[18:44:50]: $ xcodebuild -showBuildSettings -workspace FiveCalls/FiveCalls.xcworkspace -scheme FiveCalls
[18:44:53]: Command timed out after 3 seconds on try 1 of 4, trying again with a 6 second timeout...
xcodebuild: error: Could not resolve package dependencies:
  An unknown error occurred. '/Users/vagrant/Library/Developer/Xcode/DerivedData/FiveCalls-gpqeanjdlasujldgqrgmnsakeaup/SourcePackages/repositories/Down-9f901d13' exists and is not an empty directory (-4)
xcodebuild: error: Could not resolve package dependencies:
  An unknown error occurred. could not find repository from '/Users/vagrant/Library/Developer/Xcode/DerivedData/FiveCalls-gpqeanjdlasujldgqrgmnsakeaup/SourcePackages/repositories/Down-9f901d13/' (-3)

Coming from this fastfile:

  desc "Runs all the tests"
  lane :test do
    scan(workspace: "MyProject.xcworkspace",
         scheme: "MySchemeName")
  end

The problem here is that Xcode is resolving package dependencies and the build system isn’t waiting for that process to complete. Usually this works fine locally, so something is off with the CI timing here.

The Solution

According to this issue on the fastlane github, the problem should be resolved by updating fastlane to 2.138.0+. That didn’t fully resolve the issue for me, and there’s another way to force updating dependencies before building.

You can force xcodebuild to resolve the dependencies in a separate step beforehand, and scan won’t run until this completes.

  desc "Runs all the tests"
  lane :test do
    Dir.chdir("../MyProject") do
      sh("xcodebuild","-resolvePackageDependencies")
    end
    scan(workspace: "MyProject.xcworkspace",
         scheme: "MySchemeName")
  end

In this example our fastfile is in a fastlane directory adjacent to our project directory, so to move from the fastlane directory to our project, we move up one directory and into our project directory (the one with our xcodeproj file). You may need to adjust this for your project setup.

Should you write your app in SwiftUI?

I’ve hit a few roadblocks when working on Read & Share and I’m working on building separate screens in isolation while I wait for improvements from the next Xcode and SwiftUI beta (maybe next week?) to really tie things together.

It’s frustrating to not be able to move forward on the whole app flow, and I will admit that once or twice I thought about rewriting the app without SwiftUI. But at the end of the day I’m making something fun for myself, I don’t have a huge deadline looming and I wanted to learn something new that I can use to prepare for the future of Swift.

Over the next few months as we hit iOS 13 release and beyond, more and more folks will be able to start using SwiftUI to develop new parts of existing apps or start apps from scratch and ask themselves if they should jump into SwiftUI - and for the pedants in the crowd, I’m using SwiftUI to mean both the SwiftUI and Combine frameworks.

Here are my thoughts from using SwiftUI for the last few months and if you should write your next app using SwiftUI:

Pros

It’s easy to get started with the basics. Apple has a really great set of tutorials for getting used to building UIs with SwiftUI and even interacting with UIKit components from SwiftUI.

If you want a taste of how developing in SwiftUI feels, these tutorials are great at walking through the logical steps of building one part of an app.

Developing your UI is significantly faster - even faster than using Storyboards! Between the visual previews shown in the tutorials and the speed at which you can preview your work in Xcode, you can significantly cut down the time spent iterating on how your UI behaves.

Also, UI customization is not hidden in storyboards or nib configuration files. It’s all based in your SwiftUI views and not spread across multiple areas like it could be if you configured your views in nibs and code.

Refactoring UI is a simpler process. One of the great parts about SwiftUI is that it’s easy to see when your view code is getting long and pull out subviews for refactoring. I’ve been noticing three distinct steps:

  1. Start building your UI in one View
  2. During active development, break out views that are complex or repeated into new Views in the same file
  3. Once the dust settles (or the new View grows in size), move these Views into their own files or groups

It’s totally reasonable to have multiple small View components in a single file, but once they start being used from multiple locations or have their own helper methods, it’s time for them to get their own file.
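
Here’s a tiny sketch of step 2, with made-up view names:

import SwiftUI

struct BookList: View {
    let titles = ["Emma", "Dracula", "Persuasion"]

    var body: some View {
        List(titles, id: \.self) { title in
            // was an inline HStack; extracted once it started to grow
            BookRow(title: title)
        }
    }
}

// small enough to live in the same file until it's used elsewhere
struct BookRow: View {
    let title: String

    var body: some View {
        HStack {
            Image(systemName: "book")
            Text(title)
        }
    }
}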

Lastly, let’s not forget the experience of learning something new. You’ll be learning something new, but with some of the Swifty comforts you’ve become used to. This is actually pretty fun! You can usually iterate quickly and solve your problems, as long as you don’t run into the functionality blockers that are common during the beta phase.

Cons

Starting with the obvious one: your SwiftUI apps will only work on devices with iOS 13 and higher. For those of you with a large existing install base, making everyone update to iOS 13 to get the latest features might not be the best way to treat your users. Keep in mind older devices will still be able to get the last version of your app that supports iOS 12, but not any new iOS 13-only updates.

For new apps, particularly ones that are utilizing core features only available in iOS 13, this is less of an issue.

More complex tasks don’t have good example code yet. Rather than just searching stack overflow for how to accomplish a task, you might have to read the Apple docs and figure out how to put together multiple pieces that have never been written about before. There just aren’t a lot of examples for how to do things yet, and there’s a lot of new terminology to learn just to be able to sanely google about what’s going on.

Error messages can be misleading. Just like the Swift releases of yesteryear, error messages from using Combine and SwiftUI are not always the most readable or the most accurate messages.

I’ve frequently seen errors complaining about using [.top, .bottom] as a padding Edge.Set when in fact the problem was something I was doing in modifiers that follow the element the error pointed at. Sometimes error messages about lines of code being “ambiguous without more context” actually mean that the types don’t match between two calls.

A lot of these new tools are powered by generics in Swift so error messages complaining about T and U might actually be complaining about your own types that the compiler isn’t yet reasoning about correctly.

The real power of Xcode 11 comes from working in Catalina. If you’re like me and happy to jump into iOS betas after the public releases start coming out but much more hesitant about macOS betas, you’ll find that Xcode 11 on 10.14.x doesn’t have the live preview and SwiftUI refactoring power that some of the Apple tutorials mention.

These extra features are only available in 10.15 and unless you want to take that dive early, you’ll have to wait until you upgrade your main computer to take advantage of them.