Custom Homebridge Plugin for Garage HomeKit

Funny story, a few weeks ago I locked myself out because technology. I left the house via the garage to see some neighborhood commotion and realized when I came back that I had been hoodwinked by my own code.

You see, I typically let myself in via a custom developer-signed app that travels out over the internet, back into the house via a reverse proxy, and then triggers an Arduino+relay connected to the door opener. It’s got… a few single points of failure. But it had been quite reliable until that week, when I left the house without checking the app first. Developer certificates for apps only last until your current membership expires (at most a year, if you installed an app on the day you renewed your membership), and my membership had renewed since the last time I used the app - one of the secret perils of extended work-from-home, I guess.

Everything worked out and I was able to get back in relatively quickly (quoth @bradfitz: “luckily you have a friend with backdoor access to your home network”), but it prompted me to tackle a project I had been putting off for a while: migrating from a custom app to a custom homebridge plugin.

HomeKit is a far better fit for this use case: I can ask Siri to trigger it without writing my own Siri intents (which I did for the original app - except HomeKit has a monopoly on asking Siri to open the garage, so I had to configure it for “hey Siri, open the thing”), the user interface is built into the Home app and won’t expire periodically, and I can rely on an Apple TV acting as a HomeKit home hub rather than a reverse proxy. Less stuff I have to maintain or debug, and the only way I can be truly locked out is if the power is shut off.

getting started

As is customary, the actual code to wire all this stuff up is trivial but understanding the concepts behind the homebridge API is not.

I already had homebridge set up and configured for another project so I focused on how I could create a custom plugin for homebridge and connect it to my existing installation. I started by forking this example plugin project for homebridge: https://github.com/homebridge/homebridge-plugin-template

The installation instructions were great and I had the plugin showing up in homebridge-ui immediately.

Here’s where things start to get tricky: HomeKit garage door support is built around the idea that there’s a sensor that can detect whether the garage door is open or closed. That isn’t typically something a non-smart garage door can tell you. It’s got a toggle that opens, closes, or stops the door mid-movement, and your eyes and brain are the indicator that the door has finished opening or closing.

If you look at the Homebridge Garage Door service API docs, you’ll note that it handles a few different states. There is no “toggle garage door” command, but there are triggers for setting the CurrentDoorState and TargetDoorState. In an ideal world we’d trigger the garage door toggle, set TargetDoorState to open, wait for the garage to open and then set CurrentDoorState to open.
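Here’s a rough sketch of what that flow could look like using the accessory class from the plugin template - purely illustrative, not the finished plugin; toggleRelay() and OPEN_CLOSE_MS are hypothetical stand-ins for the Arduino relay call and the door’s travel time:

// Inside the platform accessory class from homebridge-plugin-template.
// this.platform.Service / this.platform.Characteristic come from the template.
const { Service, Characteristic } = this.platform;

this.service = this.accessory.getService(Service.GarageDoorOpener)
  ?? this.accessory.addService(Service.GarageDoorOpener);

this.service.getCharacteristic(Characteristic.TargetDoorState)
  .on('set', (value, callback) => {
    this.toggleRelay(); // pulse the opener's toggle input (hypothetical helper)

    // With no sensor, report OPENING/CLOSING immediately...
    const opening = value === Characteristic.TargetDoorState.OPEN;
    this.service.updateCharacteristic(
      Characteristic.CurrentDoorState,
      opening ? Characteristic.CurrentDoorState.OPENING
              : Characteristic.CurrentDoorState.CLOSING,
    );

    // ...then assume the door finished moving after a fixed delay.
    setTimeout(() => {
      this.service.updateCharacteristic(
        Characteristic.CurrentDoorState,
        opening ? Characteristic.CurrentDoorState.OPEN
                : Characteristic.CurrentDoorState.CLOSED,
      );
    }, OPEN_CLOSE_MS);

    callback(null);
  });

Without a real sensor, the fixed delay is a guess - which is exactly the gap that ideal open/close flow is trying to paper over.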

Next time:

How to structure your homebridge plugin, and trying things the hard way…

New Zealand Flax Pods

Earlier this year I noticed one of the bushes in the backyard was sending up a bunch of flowers, more than I’ve ever seen on this one bush for sure, and now they’ve fully developed into seed pods. These were impressive even pre-bloom: they’re probably 8 feet tall, and there are something like 10 flowers per stalk across the seven stalks the plant produced this year.

I thought these were super fascinating so I grabbed a few pictures. Turns out these are a variety of Phormium, or New Zealand flax, with bright pink stripes along the side of the broad leaves.

Seeds from New Zealand flax bush

Putting on my very unofficial botanist hat, the pods most likely open up and let their seeds out when they’re still quite high above the ground. The seeds, inside their disk-shaped hulls, then catch the wind, spreading farther than they would if they just dropped directly down.

openjdk cannot be opened because the developer cannot be verified when installing adb via brew

If you’re like me and enjoy the simplicity of installing command line tools using the brew command on macOS, you’ve likely run into one or two cases where Catalina prevents you from running a tool that’s been installed because it hasn’t been verified.

In this case, I’m installing the Android developer tools for React Native development and needed both adb and openjdk. I used these two commands to install them:

  • brew cask install android-platform-tools
  • brew cask install java

This situation is similar to downloading a new Mac app from any developer online. Some developers want to distribute apps without the restrictions Apple places on them, and macOS will still let you run their unsigned code - with some restrictions.

The Solution

The issue is that macOS labels all downloaded files with a “quarantine” attribute, which tells the system that they should not be run until they’ve been explicitly approved by the user.

If you’re installing an app, the sneaky way to allow opening unsigned code is to use Right Click -> Open rather than double clicking on the app icon itself. That’ll allow you to approve removing the quarantine and you can open with a double click next time.

This even works in some cases with command line tools: you can run open some/path/to/a/folder from Terminal to reveal the folder containing adb in the Finder, then right click the binary to get the standard bypass-quarantine prompt.

The JDK is trickier since it’s a folder and not an application. You can’t just right click to launch it; instead you have to manually remove the quarantine attribute from the folder where it’s been installed. You can do this easily in the terminal with this command:

xattr -d com.apple.quarantine /Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk

The command line tool xattr is used for inspecting or modifying file attributes on macOS. The -d flag removes an attribute, com.apple.quarantine is the quarantine attribute for unsigned code we discussed earlier, and the final argument is the path to the file. Your JDK might be a different version, or a different tool might live in an entirely different location.
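If you’re not sure whether quarantine is actually the culprit, you can list the attributes on the path first - xattr with no flags prints the names of any attributes it finds:

xattr /Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk

If com.apple.quarantine shows up in the output, the -d command above will clear it.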


As usual, quarantine is there to protect your computer from unsigned software. Please ensure you trust the developer you’re running unsigned code from before opening it on your machine.

React Native, Typescript and VS Code: Unable to resolve module

I’ve run into this problem largely when setting up new projects, as I start to break out internal files into their own folders and the project has to start finding dependencies in new locations.

In my case, it was complaining about imports from internal paths like import ContactPermissions from 'app/components/screens/contactPermissions';.

The error message tries to help by giving you four methods for resolving the issue, which seem to work only in the most naive cases:

Reset the tool that watches files for changes on disk:

watchman watch-del-all

Rebuild the node_modules folder to make sure something wasn’t accidentally deleted

rm -rf node_modules && yarn install

Reset the yarn cache when starting the bundler

yarn start --reset-cache

Remove any temporary items from the metro bundler’s cache

rm -rf /tmp/metro-*

These cases might work for you if your problem is related to external dependencies that may have changed (maybe you changed your node_modules without re-running yarn or installed new packages without restarting the packager).

In my case with VS Code, none of these resolved the problem; I was still running into errors where modules could not be found.

The Solution

The problem here turned out to be related to VS Code’s typescript project helper. When I referenced existing types in my files, VS Code was automatically importing the file for me - this is usually very helpful!

But for whatever reason, the way my app is set up means that even though VS Code could tell where app/components/screens/* was located (an incorrect import path usually causes VS Code to report an error on that line), typescript had trouble determining where this file lived from this path. Even being more specific about the start of the path with ./app/components/... was not working for the typescript plugin.

What did work was using relative paths in my typescript files. So instead of referencing files from app/components/screens/contactPermissions, I would use ../components/screens/contactPermissions for a file that was located in a different subdirectory of app.
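As a concrete (hypothetical) example, for an importing file that lives somewhere under app/, the change looks like this:

// before: resolves in VS Code but fails in the bundler
import ContactPermissions from 'app/components/screens/contactPermissions';

// after: relative to the importing file (assumed here to live in app/navigation/)
import ContactPermissions from '../components/screens/contactPermissions';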

This can be difficult to do manually (remembering what path you’re in and how many directories to go back up, etc), but VS Code can also generate and change these imports for you if it’s configured to do so.

Navigate to your workspace settings, search for typescript import, and change the Typescript Import Module Specifier from auto to relative.

Or, set it directly in your settings JSON:

"typescript.preferences.importModuleSpecifier": "relative"

FFmpeg exited with code 1, Homebridge and HomeKit configuration with Axis camera

If you’re trying to use the homebridge-camera-ffmpeg plugin for homebridge to connect your IP camera to HomeKit, you may have run into issues with ffmpeg exiting with code 1 when trying to stream. This usually means ffmpeg can’t launch with the options provided in your camera configuration, but many different things can go wrong and it’s hard to debug.

[1/18/2020, 8:27:54 PM] [Camera-ffmpeg] Snapshot from Front door at 480x270
[1/18/2020, 8:27:56 PM] [Camera-ffmpeg] Start streaming video from Front door with 1280x720@299kBit
[1/18/2020, 8:27:56 PM] [Camera-ffmpeg] ERROR: FFmpeg exited with code 1

There are lots of ways this can go wrong, so here are some steps to figure out where you might be having issues.

The Solution

First, confirm that ffmpeg is installed and runs on your homebridge server: just run ffmpeg at the command line. Here’s what a successful run looks like:

ffmpeg version 2.8.15-0ubuntu0.16.04.1 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) 20160609
  configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu... etc

You may want to note the codecs that ffmpeg has been installed with. For my particular Axis camera, it was important to have h264 support so you’ll look for --enable-libx264.

Next, you need to make sure you have the right video and still image source URLs for your Axis camera. There are quite a few variations. Here is how the full configuration looks:

{
  "platform": "Camera-ffmpeg",
  "cameras": [
    {
      "name": "Front door",
      "videoConfig": {
        "source": "-rtsp_transport tcp -i rtsp://user:pass@1.2.3.4/axis-media/media.amp",
        "stillImageSource": "-i http://1.2.3.4/jpg/image.jpg?size=3",
        "maxStreams": 2,
        "maxWidth": 1280,
        "maxHeight": 960,
        "maxFPS": 30,
        "vcodec": "h264"
      }
    }
  ]
}

Both source and stillImageSource URLs can be looked up on this Axis endpoint chart. Note that you need to add a username and password in the URL if your camera requires them, and of course substitute your own camera IP for 1.2.3.4.
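You can also sanity-check the RTSP URL outside of homebridge by running ffmpeg against it directly (substituting your own credentials and IP); if this fails, the plugin has no chance of working either:

ffmpeg -rtsp_transport tcp -i rtsp://user:pass@1.2.3.4/axis-media/media.amp -t 5 -f null -

This decodes five seconds of video and throws the output away, so errors here point at the camera URL or codec support rather than at the plugin.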

Lastly, if you still can’t figure out what’s going wrong, enable debug mode for your homebridge-camera-ffmpeg source and get more information:

...
  "maxFPS": 30,
  "vcodec": "h264",
  "debug": true
...

This will give you more info about what the plugin sees from your camera and what the result of the ffmpeg call is when it tries to fetch the stream. You’ll need to attempt to view the video stream in HomeKit to kick off the ffmpeg process and generate the debug output.

xcodebuild: error: Could not resolve package dependencies with Fastlane and Swift Package Manager on CircleCI / Bitrise

The Problem

If you’re running tests on your iOS build CI pipeline with fastlane, you might run into an issue when running scan using Xcode 11+ if you’ve got some Swift package manager dependencies. The full error might look like this:

[18:44:50]: ------------------
[18:44:50]: --- Step: scan ---
[18:44:50]: ------------------
[18:44:50]: $ xcodebuild -showBuildSettings -workspace FiveCalls/FiveCalls.xcworkspace -scheme FiveCalls
[18:44:53]: Command timed out after 3 seconds on try 1 of 4, trying again with a 6 second timeout...
xcodebuild: error: Could not resolve package dependencies:
  An unknown error occurred. '/Users/vagrant/Library/Developer/Xcode/DerivedData/FiveCalls-gpqeanjdlasujldgqrgmnsakeaup/SourcePackages/repositories/Down-9f901d13' exists and is not an empty directory (-4)
xcodebuild: error: Could not resolve package dependencies:
  An unknown error occurred. could not find repository from '/Users/vagrant/Library/Developer/Xcode/DerivedData/FiveCalls-gpqeanjdlasujldgqrgmnsakeaup/SourcePackages/repositories/Down-9f901d13/' (-3)

Coming from this Fastfile:

  desc "Runs all the tests"
  lane :test do
    scan(workspace: "MyProject.xcworkspace",
         scheme: "MySchemeName")
  end

The problem here is that Xcode is resolving package dependencies and the build system isn’t waiting for that process to complete. Usually this works fine locally, so something is off with the CI timing here.

The Solution

According to this issue on the fastlane github, the problem should be resolved by updating fastlane to 2.138.0+. That didn’t fully resolve the issue for me, and there’s another way to force updating dependencies before building.

You can force xcodebuild to resolve the dependencies in a separate step beforehand, and scan won’t run until this completes.

  desc "Runs all the tests"
  lane :test do
    Dir.chdir("../MyProject") do
      sh("xcodebuild","-resolvePackageDependencies")
    end
    scan(workspace: "MyProject.xcworkspace",
         scheme: "MySchemeName")
  end

In this example our Fastfile is in a fastlane directory adjacent to our project directory, so to move from the fastlane directory to our project, we move up one directory and into our project directory (the one with our xcodeproj file). You may need to adjust this for your project setup.

Should you write your app in SwiftUI?

I’ve hit a few roadblocks when working on Read & Share and I’m working on building separate screens in isolation while I wait for improvements from the next Xcode and SwiftUI beta (maybe next week?) to really tie things together.

It’s frustrating to not be able to move forward on the whole app flow, and I will admit that once or twice I thought about rewriting the app without SwiftUI. But at the end of the day I’m making something fun for myself, I don’t have a huge deadline looming and I wanted to learn something new that I can use to prepare for the future of Swift.

Over the next few months, as we hit the iOS 13 release and beyond, more and more folks will be able to start using SwiftUI to develop new parts of existing apps or to start apps from scratch, and they’ll ask themselves whether they should jump into SwiftUI. (For the pedants in the crowd, I’m using SwiftUI to mean both the SwiftUI and Combine frameworks.)

Here are my thoughts from using SwiftUI for the last few months, and on whether you should write your next app using SwiftUI:

Pros

It’s easy to get started with the basics. Apple has a really great set of tutorials for getting used to building UIs with SwiftUI and even interacting with UIKit components from SwiftUI.

If you want a taste of how developing in SwiftUI feels, these tutorials are great at walking through the logical steps of building one part of an app.

Developing your UI is significantly faster - even faster than using Storyboards! Between the visual previews provided in the tutorials and the speed at which you can preview your work in Xcode, this can significantly cut down the time spent iterating on how your UI behaves.

Also, UI customization is not hidden in storyboards or nib configuration files. It’s all based in your SwiftUI views and not spread across multiple areas like it could be if you configured your views in nibs and code.

Refactoring UI is a simpler process. One of the great parts about SwiftUI is it’s easy to see when your view code is getting long and pull out subviews for refactoring. I’ve been noticing three distinct steps:

  1. Start building your UI in one View
  2. During active development, break out views that are complex or repeated into new Views in the same file
  3. Once the dust settles (or the new View grows in size), move these Views into their own files or groups

It’s totally reasonable to have multiple small View components in a single file, but once they start being used from multiple locations or have their own helper methods, it’s time for them to get their own file.
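Here’s a minimal sketch of what step 2 looks like in practice - the names are made up, but this is the shape of pulling a repeated row out into its own View in the same file:

import SwiftUI

// The original view keeps shrinking as repeated chunks move into small Views.
struct BookListView: View {
    let titles: [String]

    var body: some View {
        List(titles, id: \.self) { title in
            BookRow(title: title)
        }
    }
}

// Extracted row: small enough to live in the same file until it grows
// helpers or gets used from multiple places.
struct BookRow: View {
    let title: String

    var body: some View {
        HStack {
            Image(systemName: "book")
            Text(title)
        }
    }
}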

Lastly, let’s not forget the experience of learning something new. You’ll be learning something new, but with some of the Swifty comforts you’ve become used to. This is actually pretty fun! You can usually iterate quickly and solve your problems, as long as you don’t run into functionality blockers like the ones that are common during the beta phase.

Cons

Starting with the obvious one: your SwiftUI apps will only work on devices running iOS 13 and higher. For those of you with a large existing install base, making everyone update to iOS 13 to get the latest updates might not be the best way to treat your users. Keep in mind devices stuck on iOS 12 will still be able to get the last version of your app that supported iOS 12, just not any new updates that are iOS 13-only.

For new apps, particularly ones that are utilizing core features only available in iOS 13, this is less of an issue.

More complex tasks don’t have good example code yet. Rather than just searching stack overflow for how to accomplish a task, you might have to read the Apple docs and figure out how to put together multiple pieces that have never been written about before. There just aren’t a lot of examples for how to do things yet, and there’s a lot of new terminology to learn just to be able to sanely google about what’s going on.

Error messages can be misleading. Just like the Swift releases of yesteryear, error messages from using Combine and SwiftUI are not always the most readable or the most accurate messages.

I’ve seen frequent complaints about using [.top, .bottom] as a padding Edge.Set when in fact the error was in modifiers that follow the element the error pointed at. Sometimes error messages about lines of code being “ambiguous without more context” actually mean that the types don’t match between two calls.

A lot of these new tools are powered by generics in Swift so error messages complaining about T and U might actually be complaining about your own types that the compiler isn’t yet reasoning about correctly.

The real power of Xcode 11 comes from working in Catalina. If you’re like me and happy to jump into iOS betas after the public releases start coming out but much more hesitant about macOS betas, you’ll find that Xcode 11 on macOS 10.14.x doesn’t have the live preview and SwiftUI refactoring power that some of the Apple tutorials mention.

These extra features are only available on macOS 10.15, and unless you want to take that dive early, you’ll have to wait until you upgrade your main computer to take advantage of them.

Read & Share Build Log #1

I’ve been working on a project that I’m aiming to release with iOS 13 later this year, and I’ve decided to write some build logs here about interesting features or new things I’m learning. I talked a bit about it on Twitter.

The idea for Read & Share stems from a) my interest in using some new features from iOS 13 in production and b) my newfound reading time during my commute where I wanted to share what I was reading on Twitter et al but didn’t have the tools to do so - not all of us can have that Notes.app screenshot aesthetic.

This series will be a mix of how I build features that I’m familiar with as well as experiments with the newer iOS 13 and Xcode 11 features that we’re all unfamiliar with.

Even experienced iOS engineers are newbies again with SwiftUI and Combine, and the flood of posts about working with the new features shows how fresh even the basics are for everyone.

Let’s get right to the first build log:


The fundamental piece of UI here, the one that everything else feeds into and out of, is the highlighting screen, so that’s where I’m starting the app. There are lots of pieces that I know how to do already (but maybe not in iOS 13, who knows!), and this is at least one piece that I’m going to iterate on a lot, so I might as well get a first version in.

Text comes into the app in various ways - sharing existing highlights from e-readers, copy-pasting chunks of text and even taking camera shots from physical books - and it all hits the highlight screen where you can select the part you want to share. After that you can tweak the book source or play with the share style, but all of these other elements flow through this one interface that needs to be intuitively understandable through a range of use cases.

highlight flow

I started working on this exact interface in SwiftUI and realized that I didn’t know anything about it, then restarted it in UIKit where I was much more familiar. Eventually I’d like to rebuild all of this in SwiftUI but I’ve settled for building the easy stuff (Drawers! Navigation! Tabs!) in SwiftUI and giving myself some breathing room on the custom UI in UIKit for now.

That’s one of the nice parts about SwiftUI: you’re not completely cut off from UIKit if you don’t want to be, but there’s some boilerplate to connect the two. We’ll most likely cover this in an upcoming post too.

Making selections

The end goal here is making it easy to tap and drag to select text, which sounds simple but takes a number of steps:

  1. Get bounds for each word
  2. Get tap points
  3. Manage word selections
  4. Draw stylized highlight layers

Support for finding text bounds in UITextView is pretty good, so I’ve picked that for the base text display. I started by using firstRect(for:) to find rects for each word that can be selected.

Getting our rects requires a string Range, which is not quite the same as a standard index. You can refresh your Swift string knowledge here, but the short version is that we need a few extra steps to finally get to a Range that we can use to get our word rects.

Originally I implemented this with the first method I saw, range(of: string), and it was a good starting point for validating what the rects looked like so we could use them both as the basis of the highlight shapes and as a way to determine whether taps have hit a word. Eventually, though, we needed to generate these ranges for each word, not just the first occurrence of a word like the simple range(of: string) gives us.

Two sub-optimal parts here: first, Scanner is not as Swift-friendly as we’d like, but a pointer to an optional NSString (passing &nextWord where nextWord is an NSString?) will do the job when the docs say it’s looking for an AutoreleasingUnsafeMutablePointer<NSString?>?. Second, this code is not very unicode-safe as it is. I’m doing some character counting here, which doesn’t line up directly with how String collapses complex multi-character glyphs behind String.Index. I’ll continue to refine this component during this process, and one of those steps will include checking unicode support. For now, this’ll do fine.

The entire block scans up to the next whitespace, gets the start and end position (as UITextPosition) for each word, uses that to get a UITextRange which in turn is used to get a CGRect for that word. Text is static once it’s in the highlighter (for now), so computing everything upfront makes sure we have all the data we need for the rest of our highlighting step.

func loadRects(fromTextView textView: UITextView) {
    var rects: [WordRect] = []

    // Track how far into the string we've scanned so we can build ranges later.
    var currentScanPosition = 0
    let scanner = Scanner(string: textView.text)
    while !scanner.isAtEnd {
        // Grab the next whitespace-delimited word.
        var nextWord: NSString?
        scanner.scanUpToCharacters(from: .whitespacesAndNewlines, into: &nextWord)
        guard let existingNextWord = nextWord else { return }

        // Convert our character offsets into positions relative to the start of the document.
        let startPosition = textView.position(from: textView.beginningOfDocument, offset: currentScanPosition)
        let endPosition = textView.position(from: textView.beginningOfDocument, offset: currentScanPosition + existingNextWord.length)

        // Build a UITextRange for the word and ask the text view for its rect.
        if let startPosition = startPosition,
            let endPosition = endPosition,
            let textRange = textView.textRange(from: startPosition, to: endPosition) {
            let rect = trimmedRectFromTextContainer(textView.firstRect(for: textRange))
            rects.append(WordRect(withRect: rect, andText: existingNextWord as String))
        }

        // Skip past the word plus the single whitespace character that follows it.
        currentScanPosition += existingNextWord.length + 1
    }

    self.wordRects = rects
}

Once I have the word rects, taps are sent to the selection manager which applies any selection rules. If you tap on the first word and the last word, the app should highlight all the words in the middle for you - this logic and more is all handled in the selection manager.
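As a rough sketch of that rule (the names here are assumptions, not the actual implementation), selections tracked by word index might look something like this:

// Tracks selected word indexes and fills in the gap between taps.
// A sketch only - the real selection manager applies more rules than this.
class SelectionManager {
    private(set) var selectedIndexes: Set<Int> = []

    func handleTap(onWordAt index: Int) {
        guard let first = selectedIndexes.min(), let last = selectedIndexes.max() else {
            // Nothing selected yet: start a new selection with this word.
            selectedIndexes = [index]
            return
        }

        if index < first || index > last {
            // Tapping outside the current selection extends it, selecting
            // every word between the existing selection and the new tap.
            selectedIndexes = Set(min(first, index)...max(last, index))
        } else {
            // Tapping inside the current selection deselects that word.
            selectedIndexes.remove(index)
        }
    }
}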

Finally, the view controller takes the selections and, knowing a bit about the rules for how text can be selected, makes custom CAShapeLayers displayed in the layer behind the UITextView.
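A bare-bones version of that last step might look like this, assuming the text view is a direct subview of the view controller’s view and that highlightLayers is a stored property (the real styling is more involved):

// Build one rounded-rect layer per selected word rect and slot them in
// behind the text view.
func drawHighlights(forSelectedRects selectedRects: [CGRect]) {
    highlightLayers.forEach { $0.removeFromSuperlayer() }

    highlightLayers = selectedRects.map { rect in
        let layer = CAShapeLayer()
        // Rects from the text view need converting into the container's coordinates.
        let converted = textView.convert(rect, to: view)
        layer.path = UIBezierPath(roundedRect: converted, cornerRadius: 4).cgPath
        layer.fillColor = UIColor.systemYellow.cgColor
        return layer
    }

    highlightLayers.forEach { view.layer.insertSublayer($0, below: textView.layer) }
}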

highlight process

The separation between what happens in the selection manager and the view controller is at the display level. The selection manager shouldn’t need to know anything about the layout of the screen, just the basic rules for how to select text. The parent view controller can handle both a conversion from taps → word rect hits as well as selected rects → highlight layer locations.

Paying for Open Source

GitHub launched a new feature yesterday: sponsorship for open source developers.

On its face this seems like a great idea, people who write open source software largely get nothing right now, so more than nothing must be better, right?

But as many folks are right to point out, this is not as simple as it seems. Open source maintainers are already subject to entitled users demanding attention to their pet feature, even if it’s not an explicitly supported use case.

Sponsorship brings a whole new level of “you owe me” to small software that is a dangerous trap to fall into, especially for newer developers. Even without money, writing software can be a trap:

If you’re a young developer writing software for the first time, maintaining and supporting that software feels like your only choice! I spent more time than I’m comfortable with supporting software that I no longer used or cared about because users of that software demanded it, and that’s not how you should treat something that you do for free.

The problem with adding money into the mix is that the guilt of open source is even stronger if you’re taking money from people, and it’s unlikely to make 90% of developers enough money to actually be meaningful.

But I guess the part that bothers me more is that it seems designed around individuals supporting open source developers that write software they use. In reality, most of the monetary value of using open source code is actually gained by startup software companies who make money on services built on top of this free software, not individual developers throwing together a hobby project.

This is a lot more explicit on services like Open Collective, where there are already first class user types for companies rather than individuals, and companies that support open source software are promoted in a different way that helps make this practice more widely held and sustainable. Just check out the Babel project, where you can clearly see support from Airbnb, Adobe, Salesforce and others.


One interesting note that has been overlooked: GitHub’s support for a new FUNDING.yml file which defines a user on various open source funding services. In cocoapods-land, we have a plugin which collects all the licenses from the open source pods you’re using and compiles them automatically into an acknowledgements file for use in your app, so you can properly attribute the open source code you’re using. What if we did that, but for supporting open source code?
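For reference, FUNDING.yml is just a small YAML file in a repo’s .github folder that maps funding platforms to account names - a hypothetical example (the supported keys may have expanded since launch):

# .github/FUNDING.yml
github: your-github-username
open_collective: your-project
custom: https://example.com/donate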

In fact, this is exactly the approach suggested by a compelling, if unfinished, project that @aboodman was working on a while back, called dot-donate. Making it easier to support developers would go a long way to making developing open source code a sustainable job, rather than a guilt-driven side project.

This is a treacherous first step for GitHub; I hope they can turn it into something that makes the practice of supporting open source code a startup-driven endeavor, rather than an individual one.

laquo encodings

previously on laquo.net

Starting in 2005, this site was a single-serving reference for the unicode entity raquo, », along with some of the other members of the “quo” family. 15 (!!!) years later I still get emails telling me people find it useful, so hopefully this page will get picked up as the replacement reference for that content.

«
html: &laquo; or &#171;
uri: %C2%AB
mac: opt+\
win: alt+0171