Thanks to PDFpen for sponsoring BrettTerpstra.com this week!
On June 12th, Smile will celebrate 15 years of making productivity software and providing fast, friendly service to customers.
PDFpen 1.0 debuted at Macworld San Francisco in 2004, and it’s been evolving ever since. The new PDFpen 10 includes watermarking, headers & footers, a precision edit tool, and more. PDFpenPro 10 adds batch OCR, making bulk OCR a snap. The recently released version 10.1 adds AppleScript support for the new features, including automation for PDFpenPro’s batch OCR.
The world needs better forms. Tripetto includes a visual form editor, a collector for gathering responses, and an SDK for developing form building blocks. It’s a self-hosted Node app, so it’s not for everyone, but it’s really nice for flowing forms.
Fathom (now available on GitHub) is a new website analytics platform built on simplicity and trustworthiness (recall the now-unsupported Mint?). Get the analytics you need without giving Google (Analytics) access to any of your visitors’ data. This site should be switching over soon.
Home Assistant is worth noting in addition to all of my mentions of homebridge. Pointed out to me by Adrian Rudman, it has modules for just about everything you can imagine wanting to pull together (and add Siri/Alexa integration to).
Forgive me for pontificating and recollecting like an old man for a while. I’ll be 40 next month, so I’m practicing.
Let’s start in the late 90s. My landlord had informed me that he’d sold the property I was renting, and I had a few weeks to move out. I’m still not sure that was legal, but fortunately my parents had moved out of state while keeping the house where I spent my teens, so I ended up renting their furnished basement while they were gone.
I lived with Aditi in that house for the first few years we were married. She put up with a lot of my early home automation experimentation (grudgingly). All of the light switches became X10 switches. I installed (poorly-mounted) speaker systems in the bathroom, kitchen, and living rooms, running wires from the SoundBlaster card in my PC in the utility room. An AMP jukebox and voice synthesis app let the house start providing multi-room audio, and even talking to us.
I added remotes around the house. Alarm clocks that could also turn lights on and off. Slow-wake sunrises with the bedroom lights. The TV remote could control the TV, my homemade DVR, and lighting scenes. I hacked a couple of Audreys with WiFi adapters and LCARS menus (handcrafted in Flash) for touch-screen control of everything, and mounted them in the stairwell and the hall to the bedroom.
Back then, I was always the first one in the bathroom in the morning, so it was easy to have a morning automation routine that was just for me. It would only trigger once during the day, and only between 5 and 6am. The bathroom light shined directly into the bedroom, so a door close sensor would trigger the ramp up of the bathroom and kitchen lights, start the coffee maker, and then proceed to read me the weather and my appointments for the day in a hushed tone.
In a time when most people considered voice control the stuff of sci-fi, I rigged the house’s late 80s intercom system up to my PC running a voice accessibility program and Homeseer to control all of my X10 switches. It didn’t work terribly well, but I could instruct the computer to “turn on the lights in the living room” as long as I’d left the intercom in “listen” mode (or walked across the room to press the “talk” button that was 3’ from the light switch).
Eventually I got my own house. The electrical system was noisy, and X10 (which communicates mostly over power lines) stopped working as well. Over time I upgraded my system to Insteon, and when I switched to Mac in 2000, I started using Indigo. The hardware interfaces for Insteon also controlled X10 devices, so everything kept working together.
Just like with my Homeseer setup, I could program complex criteria and sequences for my home in Indigo. I could have multiple lights respond to a single switch, and respond differently based on variables.
I could have motion detectors that were smart enough to keep each other active when only one was seeing motion at a time, or trigger different events based on the order that the hallway motion sensors were triggered (so lights can follow you).
I could have open/close sensors on doors. I could have moisture and light sensors trigger anything I wanted to.
I could connect anything I wanted to.
And I could hack anything I wanted to. I took a beam sensor from a grocery store door, originally intended for counting foot traffic, and turned it into a laser trip wire that controlled software variables in the system. That made directional motion sensing way easier. I hacked a Radio Shack mailbox light sensor to announce when the mail had run, waiting until I got home if the house was empty. I combined a series of motion sensors, open/close sensors, and the beengone utility I wrote to determine if I was in my office, routing notifications (and summons from the wife) to blink(1)s and Powermates if I was there.
Then Siri came along, and what I wanted more than anything was to tell her to do some of this stuff for me. Good voice recognition and response means infinitely more possibilities than any switch available. Being able to turn on the deck lights from my watch was an amazing prospect.
When I started, X10 was it. By now, protocols had become disparate and proprietary, and sticking with X10, Insteon, and Z-wave meant no HomeKit compatibility. I’ve since hacked my way around that with homebridge, and it’s been working well. Even though my lights are a combination of various protocols and manufacturers, I can control them from bed with a “Hey Siri,” and turn off the basement lights from the living room by talking to my watch like freaking Dick Tracy.
Over the last year I’ve been adding Hue bulbs, switches, and motion sensors. They’re amazingly responsive and reliable, and I can control them with both Siri and Alexa, though initially not with Indigo. Short of settling for the more limited capabilities of the Home app on my iPhone, I couldn’t make them talk to each other or control them via a central, scripted platform.
When I started adding Alexa devices to the mix, I ran into more issues with tying it all together. Then I discovered a homebridge plugin that broadcasts the whole system to Alexa as well. I also discovered that there’s a plugin for Indigo itself that provides 2-way integration with Hue products, and one that integrates Alexa right in (without needing homebridge). So now I can incorporate all of these newer products into my scripting and control system.
In addition to being able to control all of my devices, scenes, and routines via both Siri and Alexa, I can also easily integrate things like Flic buttons, thanks to Indigo’s REST API. This means I can have a button under my desk that toggles lights (and absolutely does not close or lock any doors automatically). The API also means that services like IFTTT can provide some glue that would otherwise be overly complex to engineer.
I can use the latest devices, and still have a system where a wall switch determines whether it’s daytime, and executes a different function than it would at night. I can have an open/close trigger on the bathroom door that triggers different routines based on the time of day. I just need to get RFID or BTLE identification working so that I can control actions based on who is triggering them.
I get some of the best voice control I could reasonably ask for (more with Alexa than with Siri), and the best of scriptable automation. Automation hardware is becoming more affordable, though the protocols are becoming more and more fragmented. I’m quite thankful for HomeKit in this area. Manufacturers can have all the proprietary protocols they want, and as long as everything publishes to HomeKit, it all works together. (Same with other home automation protocols, as long as manufacturers publish to them.) It is annoying to have to buy a different hub for every protocol, but it’s not as though any serious home automation enthusiast expected minimal hardware to be part of the equation.
While I’d still love to see HomeKit itself become more scriptable — allowing full control over all of the devices it recognizes with complex interactions between switches, sensors, and devices — I can’t imagine programming it all on any mobile device. Which, of course, means Apple isn’t going to be interested in developing it. So Indigo gets to fill in the gaps.
I love the idea of the Apple TV or the HomePod as a central brain. For a while there it seemed like the coolest stuff was only going to work when a specific iOS device was around to interface everything. Eventually I might not even have to have an always-on Mac mini processing everything. Honestly, the hardware options (within the Apple ecosphere and outside of it) are getting really good, and the Home app itself isn’t bad. Most of HomeKit’s target audience isn’t going to feel constrained by it. Only the sci-fi-loving nerds who probably have the skills to do it anyway are going to feel the need to build desktop apps and controllers.
I’ve been doing this for a long time, and I’ve never been more excited about the possibilities.
I have a longer home automation post in the works. It’s actually more philosophical than “how-to,” so I’m taking my time with it. My discovery this week bears mentioning on its own, though.
I’ve become more and more enamored with Amazon’s Alexa, and fascinated with its superiority to Siri. Echo dots are relatively cheap, and the Philips Hue integration with the spying little devices is polished. My biggest issue was that the majority of my home is automated using devices that are neither Alexa nor HomeKit compatible, at least not in a way that works with all of the scripting I’ve done previously.
I’d hacked around the HomeKit issues using homebridge, which I’ve talked about before. It requires an always on home server, so it’s not a solution for everyone, but it did the trick for me. I don’t have a HomePod, but I imagine that it would be a nice addition to that integration. What I do have is 4 Echo Dots, and what I wanted was Alexa control over my Indigo setup.
Then over the weekend I discovered that there’s an Alexa plugin for homebridge. The setup is, relative to the Siri setup, really simple. With the combination of my Indigo plugin and the Alexa plugin, I have complete voice control over all of my Hue devices and my Insteon/z-wave devices, as well as access to my custom Indigo Actions and Triggers.
If you have any kind of similar setup, it’s definitely worth looking at the homebridge-alexa plugin. You’ll need an account through cloudwatch to install the Alexa skill (search for homebridge in the Skill section of your Alexa app). You’ll also need to run your homebridge instance in insecure mode (homebridge -I). All together it took me about 15 minutes to have full Alexa access to all of my devices, and I can even add Insteon and Z-Wave devices to Alexa’s “rooms,” so that I can just tell the Dot in my office to “turn on the lights” and it knows which lights to toggle.
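For reference, the corresponding platform entry in homebridge’s config.json looks roughly like this. This is a sketch: the exact field names may vary by plugin version, and the placeholder credentials stand in for the cloudwatch account mentioned above.

```json
{
  "platforms": [
    {
      "platform": "Alexa",
      "name": "Alexa",
      "username": "cloudwatch-account-email",
      "password": "cloudwatch-account-password"
    }
  ]
}
```

Once homebridge restarts in insecure mode with this platform block in place, the Alexa skill can discover everything homebridge publishes.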
It has the further benefit of allowing me to ask Siri and Alexa to do the same things, without having to think as much about which one has which capabilities.
In case it didn’t come through in my writing, I’m very excited about this.
I find the bash commands complete and compgen overly mysterious, so I’m often playing with them to try to get a better grasp of all the poorly-documented options. complete is a shell builtin with no man page; all you get is the help complete output, and it’s vague.
I’ve detailed some of my other exploits with this in the past, one of my favorites being custom app alias completions. This time I wanted to go a lot simpler.
The afplay command comes default with OS X’s BSD installation. It’s the “Audio File Play” command used to play sound files in compatible formats. I usually use it in scripts to play the system sounds (Glass, Basso, etc.). So I wrote a quick function to make it easier to get to those:
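The function itself didn’t survive the excerpt. A minimal sketch consistent with the description (a case-insensitive name lookup in /System/Library/Sounds handed off to afplay; the original may differ) might look like:

```shell
# Play a macOS system sound by name, e.g. `play basso`
# (a sketch; the original function may differ)
play() {
  local req sound
  # Lowercase the requested name so `play basso` matches Basso.aiff
  req=$(echo "$1" | tr '[:upper:]' '[:lower:]')
  for sound in /System/Library/Sounds/*.aiff; do
    if [[ $(basename "$sound" .aiff | tr '[:upper:]' '[:lower:]') == "$req" ]]; then
      afplay "$sound"
      return 0
    fi
  done
  echo "Unknown sound: $1" >&2
  return 1
}
```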
With that in place, I can just call play basso and it will play the sound. I don’t always remember the names of all the sounds, though, which means I have to ls /System/Library/Sounds to see them. A perfect job for shell completion, right?
So here’s the simple script that I source in ~/.bash_profile to give me tab completion of system sounds, listing them all if I haven’t started typing a word yet.
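The script didn’t survive the excerpt either; here’s a reconstruction based on the description that follows. The SOUND_DIR variable is my parameterization for illustration; the original presumably hardcoded the path.

```shell
# Tab completion for the `play` function, sourced from ~/.bash_profile.
# (Reconstructed sketch; the original script may differ.)
SOUND_DIR=${SOUND_DIR:-/System/Library/Sounds}

_complete_system_sounds() {
  local cur lower s
  # Lowercase the word being completed
  cur=$(echo "${COMP_WORDS[COMP_CWORD]}" | tr '[:upper:]' '[:lower:]')
  COMPREPLY=()
  # Candidate list: each file's basename minus the .aiff extension
  for s in "$SOUND_DIR"/*.aiff; do
    s=$(basename "$s" .aiff)
    lower=$(echo "$s" | tr '[:upper:]' '[:lower:]')
    # compgen won't match case-insensitively against a custom word list
    # (even with `shopt -s nocasematch`), so match with a loop instead
    if [[ $lower == "$cur"* ]]; then
      COMPREPLY+=("$s")
    fi
  done
}

# Attach completion to `play` only; `afplay` keeps the bash default
complete -F _complete_system_sounds play
```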
What it does is create an array from the result of listing the sounds directory and getting the base name of every file minus the .aiff extension. Then, rather than using compgen to do the matching, it uses a custom loop to handle case insensitive matching. This would normally work by default with compgen and shopt -s nocasematch, but for reasons I’m not clear on, it doesn’t when you’re providing a custom list.
The _complete_system_sounds function is used to case-insensitively complete the “play” function when I call it, so typing play f[TAB] will offer me “Frog” and “Funk.” afplay completion continues using the Bash default, only my custom function is affected.
Hey, I made a new video for Marked 2. It covers multi-file documents created with Marked 2’s syntax, MultiMarkdown and iA Writer syntaxes, or Leanpub and GitBook formats. This one doesn’t go into Scrivener and Ulysses capabilities, but those are pretty straightforward to begin with (just drop a Scrivener file on it or use Ulysses ⌘6 preview).
It also covers the special tools Marked provides for working with multi-file documents, including its ability to tell you which included file you’re currently viewing, and edit just the part of the document you need to.
Lastly, I cover bookmarking points in the document, navigating with the minimap, and the auto-scroll feature.
I don’t know if you knew this, but Marked has a LOT of tricks up its sleeve. If you haven’t tried it, you can grab a free demo. If you’re already a user, I hope this sheds some light on features you might not already be using!
There was over a year between Marked 2.5.10 and the 2.5.11 update I finally shipped on May 10th. That was way too long, and I realized I’d developed the habit of collecting enough fixes, improvements, and new features to make a release feel “justified,” even after I knew it was long past time to ship. That doesn’t fit with modern software practices, especially because it was ultimately an incremental release.
I’ve been revising that habit and releasing on a schedule closer to what Twitter does with their iOS apps, with near-weekly updates that are so minor they rarely get release notes. (Though I have a strong preference for release notes in my releases, beyond just “it makes things better.”) If I fix a couple of things, add a single new feature, etc., I’ve been creating a release. Since the 10th there have already been 5 more updates, and Marked is — as of this writing — at version 2.5.16. I imagine that will be 2.5.17 sooner than later.
My first instinct is to ask “how will anyone talk about new Marked releases if they’re not a bigger deal?” But I think frequent releases trade the short-lived boost of a big, press-worthy release for a more active and engaged user base building word of mouth over time. People are more likely to recommend something they see as vital and growing. It’s hard to judge at the moment, as Apple has given Marked 2 a top billing spot on the Mac App Store homepage, which obviously is going to inflate sales numbers for a brief period. Not complaining.
Here’s a recap of what’s happened in those last 5 releases:
I added a new parameter to the URL handler (x-marked) to allow raising affected windows after running whatever command is passed. I’ve written about the URL handler in detail previously, so for now I’ll just offer this example:
open -g 'x-marked://open?file=filename.md&raise=true'
From a script, that opens a new file and raises the window above all other application windows, but without activating Marked. Your current application remains foreground while Marked’s window is guaranteed to be visible. This parameter is usable with most commands that affect one or more preview windows (open, refresh, paste, preview, style).
I also added the ability for the MarsEdit Preview to handle previewing images still pending upload (requires MarsEdit 4). This release also fixes an issue with MarsEdit posts that don’t have a title, which covers the use case of micro.blog and probably other platforms that don’t require one.
For MathJax, I included a few more standard fonts and trimmed down the configuration options. I also added a feature that automatically normalizes math syntax between MultiMarkdown and GFM/LaTeX formatting. Whereas MMD requires \\[ and \\( to start equations, I’ve found that most people want to use the more standard $$ and $ delimiters. So depending on which processor is selected, Marked will automatically convert one to the other for you. Further, the configuration of MathJax now includes $$ and $ by default. (That has the potential to confuse things by rendering any paragraph that happens to include two dollar signs oddly, but you can disable MathJax or just escape the characters, e.g. \$3.99, to work around it.)
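For example, here’s the same display equation in both dialects (the equation itself is just an illustration):

```
MultiMarkdown:   \\[ e^{i\pi} + 1 = 0 \\]
GFM/LaTeX:       $$ e^{i\pi} + 1 = 0 $$
```

With the normalization in place, either form renders regardless of which processor is selected.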
The color scheme for CriticMarkup elements is improved for readability in both normal and “high contrast” (light on dark) modes. Then there are over a dozen fixes, including issues with curly bracket syntax being stripped when using Pandoc or Kramdown custom processors, code blocks disappearing, and a weird bug where automatically opening Scrivener along with a preview was just inserting a copy of the generated Markdown back into Scrivener’s research folder in an endless loop. Glad that one’s solved…
I won’t be keeping up at this hyper release pace for very long. I need to focus on some other projects now, but I’ve determined that instead of dedicating a full day to a certain project, I need to dedicate a week (or more) at a time and actually start finishing things. It’s been working well so far. I will be releasing new versions as updates and improvements happen, rather than holding them for months at a time. So keep your “Automatic Updates” on (or check the MAS updates page often). And if you read this blog and you’re not already a Marked user, this seems like an opportune time to check it out.
Thanks to KeyKey for sponsoring BrettTerpstra.com this week!
KeyKey is a minimalistic touch typing tutor for Mac. It’s suitable for beginners who want to learn basic touch typing skills, as well as for advanced users seeking to master alternative layouts like COLEMAK or DVORAK.
Touch typing is not about key arrangement, as you might believe. It’s about training your muscle memory, making your fingers remember the micro motions unique to each language. KeyKey knows the most popular letter combinations and words of your native language and utilizes them in lesson generation.
Letter combinations like “the”, “tea”, “ate”, and “to” are examples of natural micro motions built from four of the most popular English letters. That’s why the first lesson starts with varied combinations of these letters:
Lessons are presented in several languages, including English, Spanish, German and French, along with the popular layouts for these languages: QWERTY, COLEMAK, DVORAK, AZERTY, QWERTZ (Swiss) and BÉPO. Lessons can be changed both automatically and manually and you can add punctuation marks, capitalization, and numbers to each of them.
In the near future we plan to add separate lessons for programmers to practice touch typing on real code examples from some popular programming languages.
I don’t know what it is, but inspirational memes (and office posters) make me feel sad. Depressed. Sometimes annoyed or downright angry. Demotivational versions of the same, though, have always brought me joy. I don’t wallow in sadness or revel in insulting others, but the humor of them brightens my day. It reminds me that not everyone is simple enough to be inspired by a non-contextual quote or cute kitten, and that gives me hope.
When I was a younger man, I loved the Demotivators catalog (soon after, despair.com, but I got physical catalogs in the mail back then), and began slowly and quietly replacing all the posters around the office of my first real job with sarcastic versions. The only people who noticed were ones who took the same joy in them that I did, and all of them were still in place when I quit.
Fast forward a decade and I’ve started creating “Dimspirations.” I have over a hundred of them now. Some better than others, obviously, but I’ll take some credit for perseverance. You can see the lot by checking the #dimspirations hashtag on Instagram, at least until someone else starts using it and pollutes that stream. I use the same hashtag on Twitter and Facebook, and you can follow a hashtag on any of these services if you’d like to see more as they come out.
I’ve taken a few of my favorites and created wallpaper versions of them. High res in both 16:9 and 4:3 versions. Do with them what you will, but I’d ask that you leave the “#dimspirations” tag on them and give credit where you feel credit is due. I hope someday to find a way to sell these for actual money, despite giving a bunch of high-res versions away. Printing and shipping is a headache, though, so don’t hold your breath for too long.
You can download them all in a zip file below, or from the wallpaper page in the “other stuff” section of this site. (Where miscellaneous downloads and broken experiments go. It’s basically the messy attic of BrettTerpstra.com.)
If these offend your sensibilities, then they’re not for you. Just know that they make some of us feel better than your “hang in there” poster ever will.
Ok, so KillZapper was kind of cool in the way it let you target specific elements, but after running into issues on sites like CNN where I couldn’t easily control the click handlers and determine parent elements, I decided to just make this simpler. Click this bookmarklet to just kill all iframes and HTML5 video elements on the page without prejudice.
You need to wait until a video player has loaded before it will work, and a lot of sites use lazy loading on video so it isn’t loaded in the DOM until the video is ready to play. But really, the bookmarklet is designed for killing annoying autoplay videos, so it’s generally useful after the video has already started playing anyway.
As a side note, in this era of HTML5 video players, it’s possible to just disable autoplay in many cases.
Here’s the new bookmarklet. A simpler version that just targets all the iframes and video elements. We’ll call it “VidWipe” as a new bookmarklet independent of KillZapper. This is all it does:
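The bookmarklet code itself didn’t survive the excerpt; functionally it amounts to something like the following sketch (not necessarily the exact shipped code):

```javascript
// Core of VidWipe: remove every <iframe> and HTML5 <video> element
// from the page, no questions asked. (A sketch of the idea.)
function vidWipe(doc) {
  doc.querySelectorAll('iframe, video').forEach(function (el) {
    el.parentNode.removeChild(el);
  });
}

// Wrapped as a one-line bookmarklet:
// javascript:(function(){document.querySelectorAll('iframe,video').forEach(function(e){e.parentNode.removeChild(e)})})();
```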