Eight Years of Running a Mac Mini Server

In late 2009 I migrated all of my websites to my personal Mac Mini server, which is hosted in MacMiniColo’s data center (now part of MacStadium). You can read about my reasons for moving from hosted services to my own server here.

I’ve never looked back, and have mostly enjoyed having my own server because of the freedom it gives me to experiment and customize my environment.

Mostly.

When I first got the Mini, I was new to managing my own server and was really happy Apple provided Server.app, which is a GUI for the standard fare of services, including Apache, mail, FTP, VPN, and certificate management. I had previously dabbled in Linux server administration via hosted services and Microsoft IIS at my workplace, but it’s safe to say I was still a n00b. Server.app handled the heavy lifting and made it easy for a lightweight like me to get a simple site up and running.

Almost exactly eight years later, I’ve replaced the hardware once (a newer, faster Mini), updated macOS seven times, and replaced Server.app six or seven times. Through it all, the Mini (and MacMiniColo’s hosting) has been solid. The software? Not so much.

Apple’s Server.app is a bundle of open-source software, which sounds great — plenty of people use the same software and there are literally thousands of how-to guides on the interwebs. Except… Apple in their wisdom decided to customize pretty much everything, which meant the aforementioned guides were often useless, causing endless headaches. (On the bright side, my Google-Fu has grown immensely.)

Over the past few years, HTTPS has become an increasingly important part of web hosting. Before the advent of Let’s Encrypt, I had purchased a couple of commercial SSL certificates (WOW they’re expensive) and installed them via Server.app. This was not very difficult. But as I added more and more sites and SSL certs to my server, I started running into really weird Apache errors, which often caused ALL of my sites to become unavailable. Remember, Server.app was doing the Apache config, not me, so it should have been as easy as drag-and-drop. Finding solutions to these errors proved to be incredibly painful, as there are very few resources for Server.app, and even fewer that are up-to-date. Every Apache troubleshooting guide I’d find referred to the standard Apache installation, not the Apple-flavored installation, which stored files in completely different locations and included many modifications.

But I soldiered on, eventually sorting out each issue and hoping it would be fixed in the next version of Server.app.

Last month I finally reached a tipping point. I purchased a domain name for my wife and created a placeholder site on my server. When I added an SSL cert for the new domain, all of my sites went down (again), and I kept getting cryptic Apache errors (again).

I seriously considered switching to a hosted service and giving up the Mini, but my prior experience with hosted services was horrible, and it would likely cost even more than what I pay for the Mini.

I decided to focus on getting out from under Server.app’s grip. Two of the most appealing paths:

  1. Go the Homebrew route and install all the key software (Apache, SQL, PHP, etc.) via Homebrew.
  2. Run a Linux server in a VM.

I love Homebrew and use it frequently on my MacBook. I knew it would work well on a server. However, when I gave it a try, I had the darndest time getting Server.app to let go of resources. I was running into conflicts left and right, even after uninstalling Server.app and running cleanup scripts. I put Homebrew on hold, thinking maybe I’d need a clean install of macOS to build on, but I wasn’t ready to nuke my server just yet.
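For the curious, the Homebrew route amounts to something like this (a minimal sketch, not my exact commands; the formula names are current Homebrew names and may change over time):

# install the key services via Homebrew
brew install httpd php mysql

# have Homebrew manage them as background services
brew services start httpd
brew services start mysql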

I started looking into virtualization. Having worked with server virtualization (Proxmox) at my day job, I was excited to give a virtualized environment a try on my Mini. The Mini is not a powerhouse and only has one network card, but I figured that since I run VMs on my MacBook Pro all the time, the Mini should be able to handle them as well. Worst case scenario, it would be a learning experience and I would go back to macOS or maybe a commercial hosting service. Best case scenario, I’d have new life for my server.

I downloaded VirtualBox and used my MacBook as a testing ground to see if I could get a proof of concept up and running. I managed to get a simple but powerful LAN going in just a few hours — pfSense handled all NAT and port forwarding, and an Ubuntu server VM provided the LAMP stack. It was working very well for a proof of concept, but I still had reservations about macOS running underneath, and those pesky conflicts caused by Server.app on my Mini.
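If you prefer the command line, a similar proof of concept can be scripted with VBoxManage (a sketch only; the VM names, memory sizes, and network settings here are hypothetical):

# create the two VMs (names and OS types are illustrative)
VBoxManage createvm --name "pfsense" --ostype FreeBSD_64 --register
VBoxManage createvm --name "ubuntu-lamp" --ostype Ubuntu_64 --register

# give the Ubuntu VM some resources and attach it to an internal network behind pfSense
VBoxManage modifyvm "ubuntu-lamp" --memory 2048 --cpus 2 --nic1 intnet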

I decided it was time for a clean install of macOS on my Mini. I got in touch with MacStadium’s (formerly MacMiniColo) very helpful support staff, and they mentioned that VMware’s ESXi was available for their customers and that they’d handle the ESXi installation free of charge.

If you’re not familiar with ESXi, it’s VMware’s free hypervisor offering. It’s similar in concept to VirtualBox, but designed to run “bare metal”: as an operating system directly on the hardware, not on top of macOS. Since ESXi runs as an OS, it’s notoriously tricky to install on a Mac, especially when your server is hundreds of miles away in a data center. I jumped at the chance to have it installed by folks who know what they’re doing.

I spent the last three weeks sorting out the architecture and am pleased to announce it’s all up and running. My sites, including this one, are now being served via an Ubuntu VM on ESXi, running on my Mac Mini in Las Vegas. Finding documentation for Ubuntu has been super easy, and tasks that were previously time consuming and manual, such as obtaining and updating Let’s Encrypt certs, are now completed in a few minutes.
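To give a sense of the difference: on Ubuntu, obtaining and renewing a Let’s Encrypt cert boils down to a few commands (a sketch, assuming Apache and the certbot packages; package names vary by Ubuntu release, and example.com is a placeholder):

# install certbot and its Apache plugin
sudo apt install certbot python3-certbot-apache

# obtain and install a cert (example.com is a placeholder)
sudo certbot --apache -d example.com

# verify that automatic renewal will work
sudo certbot renew --dry-run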

It was a time-consuming transition, which explains why my sites were down for so long (sorry), but I’m really glad I made the switch. A few weeks in, and I don’t miss Server.app or macOS at all. If all goes well, this server setup should last for years (with security updates, of course).

I hope to write a more detailed account of the architecture in a future post.


Using Scraper on RetroPie

RetroPie is a fun little arcade system that runs on Raspberry Pi. It includes Emulation Station, which allows the user to select games using a USB game pad or joystick instead of a keyboard.

One of Emulation Station’s features is a scraper, which analyzes your library of game ROMs and tries to download the appropriate artwork and game metadata from online databases. If successful, when you browse your library you will be presented with nice art and game descriptions.

I was excited to try it, but ultimately found Emulation Station’s scraper to be very hit-or-miss.

Dozens of online forums and articles laud Steven Selph’s Scraper as being faster and more thorough, so I decided to give it a try. I was able to get Scraper installed rather quickly using the official RetroPie instructions for installing Scraper, but they unfortunately don’t give you much guidance beyond installation.

I rolled up my sleeves and spent a few hours tinkering. Here are my notes.

My first obstacle was how to access Scraper after installation. Seems silly in retrospect, but this took me quite some time to figure out.

Quit Emulation Station (F4 on your keyboard). You will be taken to the RetroPie command line (shell).

Note for advanced users: You can also run commands from an external computer if you have enabled SSH in RetroPie. I used SSH for most of the tasks detailed below; it was especially handy to have SFTP enabled for managing files. (video demonstration).
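Assuming RetroPie’s default settings, connecting from another machine is a one-liner (substitute your Pi’s hostname or IP address if you’ve customized things):

# assumes the default pi user; the stock password is raspberry
ssh pi@retropie.local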

Using the command line, launch the RetroPie setup script:

sudo ./RetroPie-Setup/retropie_setup.sh

If you’re unfamiliar with the command line, an .sh file is a shell script. In this scenario, RetroPie-Setup is the folder containing the script, and retropie_setup.sh is the name of the script. Placing ./ in front of the path to the script tells the system to run the script. To run the script as an administrator, begin the line with sudo (“superuser do”).

The line above is equivalent to

cd RetroPie-Setup
sudo ./retropie_setup.sh

This will bring you to the RetroPie setup menu.

RetroPie setup menu

Go to “Configuration / tools”. You will be presented with a menu of options. Scraper will be listed near the bottom:

RetroPie configuration menu

Select Scraper and hit OK. You will be presented with the Scraper menu.

Scraper menu

I changed a few options and then let it “Scrape all systems”. It worked pretty well, but there was one thing that bugged me: the artwork Scraper grabbed usually consisted of old posters or cabinet art, which often looked nothing like the game itself. I just wanted to see snapshots from within the game.

Turns out if you use Scraper on non-RetroPie systems, you have the option to specify a preference via command line flags. For example, you can specify an order of preference, with the options of snapshots, boxart, fanart, banner, and logo.
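For example, a standalone invocation might look something like this (hypothetical; the image-type flag mirrors the ones that appear in scraper.sh below):

# prefer in-game snapshots, then box art, for console games
scraper -console_img "s,b"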

I tried for quite some time to run Scraper via command line in RetroPie, using the flags specified on the Scraper site, but I often encountered errors about specific flags not being supported. The only method that worked consistently was launching Scraper as I described above. But the option to specify a preference for image type is not built into the menu. Turns out this is a known limitation.

But where there is a will, there is a way. I looked at the contents of the scraper.sh file, and it was pretty trivial to add the missing flags directly to the file using a text editor.

The Scraper script file is located at

/opt/retropie/supplementary/scraper

When accessing the folder using an SFTP app, it looks like this:

Scraper folder in SFTP app


I right-clicked scraper.sh in my SFTP app of choice and opened it using a text editor.

Line 82 had params+=(-skip_check), so I added my own line directly underneath it:

params+=(-skip_check)
params+=(-console_img "s,b,3b,l,f")

I am telling Scraper to get images for console games in this order of priority: snapshots, box art, 3D box art, logos, and fan art.

But my primary focus is arcade games, not console games, and arcade games use a different flag for artwork. Looking further down the file, I noticed the line I wanted was 114; it already specified an order of priority for arcade games. I edited it to use my preferred order: snapshots, marquees, then titles.

[[ "$system" =~ ^mame-|arcade|fba|neogeo ]] && params+=(-mame -mame_img s,m,t)

I saved the file, closed it, then re-ran Scraper using the steps listed above. To my surprise, I didn’t see any significant changes to my artwork. Turns out the old artwork was still there, and Scraper only looks for art if no art exists. I needed to delete the old art first!

According to the scraper.sh file, the images are stored in $home/.emulationstation/downloaded_images/$system. For arcade games, this translates to .emulationstation/downloaded_images/arcade.

Using the command line, I navigated to the parent folder:

cd .emulationstation/downloaded_images/

Then I removed the entire arcade folder:

sudo rm -r arcade

Warning: the sudo rm command is dangerous; it will delete whatever you specify, without asking for confirmation. Be careful not to make any typos.

When I re-ran Scraper, the snapshots downloaded as intended, and the game menu in Emulation Station now displays snapshots instead of old posters.

Hat tip to all of the open-source developers and others who created the tools and documentation that helped me sort this out.

PDFObject 2.0 released

After almost eight years in the making (and nearly seven years of procrastinating), PDFObject 2.0 has arrived.

PDFObject is an open-source standards-friendly JavaScript utility for embedding PDF files into HTML documents. It’s like SWFObject, but for PDFs.

Version 1.0 was released in 2008 and has enjoyed modest success. Based on stats from PDFObject.com (including devious hot-linkers) and integration with 3rd-party products, I’m guesstimating it has been used on well over a million web pages. (If I had a nickel for every time it was used…)

I updated it a few times over the years, but generally only if someone reported a compatibility issue. Like an old beat-up car, it was a bit crusty, but still ran like a champ. That is, it ran like a champ until the rules of the game were changed — when Microsoft changed their ActiveX strategy in Internet Explorer 10-11 and Microsoft Edge, PDFObject’s checks for ActiveX began to fail, rendering PDFObject useless in those browsers. This marked the beginning of the end for PDFObject 1.x.

An update was overdue, yet I let it sit for a couple of years – I fully admit that kids, my job, and life tend to take precedence over an unfunded open-source project. But I never stopped thinking about PDFObject. I intentionally kept it at arm’s length for a while; I was fascinated by changes in the front-end development world, and waited to see how things would shake out.

It’s incredible how much has changed since 2008. For starters, the browser landscape has completely changed. Chrome, which didn’t exist when PDFObject was first released, now rules the land. It also happens to include built-in PDF support. PDF.js was invented, and eventually became Firefox’s default PDF rendering engine. Safari renders PDFs natively using Preview. iOS and Android exploded onto the scene, as did Node.js and NPM. Conversely, Adobe Reader’s market share took a nosedive thanks to browser vendors making Adobe Reader less relevant, not to mention disdain for Adobe Reader’s bloat and security holes. And, of course, HTML5 is now official, which means the <embed> element is officially sanctioned.

PDFObject 2.0 is a complete rewrite that tries to take all of this into consideration. It supports PDF.js. It’s packaged for NPM. It uses the <embed> element instead of the <object> element (not going to rename it to PDFEmbed though). It doesn’t pollute the global space and uses modern JavaScript conventions. It supports all CSS selectors, not just IDs. If you’re feeling frisky, you can even pass a jQuery element instead of a CSS selector (note: PDFObject does not require jQuery). Lots of little changes, which I hope add up to a better experience, wider compatibility, and lots of flexibility.

If you’d like to learn more about PDFObject 2.0, please visit the official site (completely redesigned as well), with examples, documentation and a code generator: http://pdfobject.com

The code is up on GitHub, and has been posted to npm.
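If you use NPM, installation should be the usual one-liner (assuming the package is published under the name pdfobject):

npm install pdfobject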

Demos for LearnSWFObject have been moved

For the record, all demos for LearnSWFObject.com have been relocated from my personal server to GitHub. The root URL has changed from demos.learnswfobject.com to learnswfobject.com/demos.

This enables me to keep all of the demos in the same repo as the primary LearnSWFObject site. The site and demos are old, and have not been updated for years, but are still useful to some members of the Flash community. Moving the files to GitHub is a nice way to keep the tutorials and demos online while reducing my personal burden for hosting the sites.

SCORM on Google Trends

Interesting stats from Google: SCORM is clearly on the decline, as is AICC, but both are still much stronger than xAPI (aka Tin Can), which is barely registering.

2004-present (10 years)

2009-present (5 years)

2012-2014 (2 years)

“experience api, tin can api weren’t searched for often enough to appear on the chart. Try selecting a longer time period.”

Does this mean anything? I dunno. But it’s interesting to see SCORM’s steady decline over the last 10 years. Also, please forgive the un-responsiveness of the graphs; Google hard-codes their width in pixels.

Convert “localhost” to your Mac’s current IP address

When developing web pages, I use MAMP.app or my Mac’s built-in Apache. Viewing the page means using an address such as http://localhost/mypage.html. If you use custom host names (especially easy with the excellent VirtualHostX.app), you may wind up with a local address such as http://projectname/mypage.html.

This works great when you’re just testing the pages in your Mac’s browsers. However, once you cross boundaries into Windows testing (via VMs or separate laptops), the address will no longer resolve to your Mac. Why? Because localhost is local to each machine.

If you want to view the page in a VM or on another machine, just swap the domain name with your machine’s IP address. For example http://localhost/mypage.html becomes http://10.0.1.14/mypage.html. (Note: you must be on the same network or have a public IP address.)

This works very well, but it’s tiresome to manually grab the IP address anytime you want to use a VM or share the page with coworkers, especially if you’re on DHCP and don’t have a static IP address.

I decided to make my life a little easier by writing an AppleScript that looks at the open tabs in Chrome and Safari then replaces “localhost” (or custom domain) with my current IP address. Saving this as a service enables me to go to Chrome > Services to run the script.

Chrome > Services

If you’d like to give it a try, the AppleScript is available as a Gist on GitHub.
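Under the hood, grabbing the Mac’s current IP address is a shell one-liner that an AppleScript can call via do shell script (one possible approach; en0 is usually the primary interface, but on some Macs it may be en1):

# print the Mac’s current IPv4 address for interface en0
ipconfig getifaddr en0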

I guess there’s no such thing as a secure PDF

I was reading the SCORM 1.2 reference docs today. I wanted to copy a passage for my notes, but the PDF is password-protected and prevents anyone from copying text. (REALLY irritating, considering the ADL is a quasi-government organization and the docs should be open to all.)

What to do? Well, turns out there are at least two super easy ways to bypass the password protection: Upload it to Google Drive or import it to Evernote.

Google Drive

The Google Drive site includes a built-in PDF reader; when I opened the PDF in the web viewer, I was able to copy text freely. Better yet, I was able to save the PDF as an unlocked file by selecting “Print” then choosing “Save as PDF” in the print options.

Evernote

When dragging the file onto the Evernote app (Mac), the PDF shows up in a preview window. I was able to copy text freely. No need to save as a PDF since it’s already stored in Evernote!

Security, schmescurity

So I guess there’s no such thing as a secure PDF. I’m sure there are other services like Google Drive and Evernote, and there are definitely other techniques for defeating protection, including screen captures, OCR, and the old-fashioned approach of printing to paper then scanning the prints. If you truly need a document to be secure, don’t distribute it electronically.

iTunes, TV Shows and Apple TV

iTunes vexes me. For better or for worse, we’re an Apple household and own an Apple TV, so I’m kind of stuck with iTunes for managing my media files.

My wife and I have also purchased a significant number of DVDs over the years, which I’ve ripped to iTunes using the trusty old Handbrake (love you, Handbrake!). These DVDs include a lot of TV shows, such as Doctor Who and Magnum PI.

My workflow has always been: rip via Handbrake, then import into iTunes by dragging the m4v files onto the iTunes window. By default, the TV shows don’t have any metadata (no proper titles, descriptions, episode numbers, or artwork), and iTunes automatically files them under Movies. This means they’ll show up in Apple TV with no description, no preview picture (such as DVD box art), and no sequence information.

I recently heard someone mention iDentify, a Mac app that adds metadata to movie files. It’s not free, so I had reservations about buying it. However, $10 is a small price to pay for cleaning up such a big mess, especially if you’re a bit OCD like me. I decided to give it a try, and it works very well, especially for TV shows — if you manually specify each file’s season and episode number, iDentify will take care of the rest by performing lookups at thetvdb.com. Sweet.

iDentify took care of the metadata and artwork problem, but the files were still cluttering my Movies menu, making it very hard to navigate with a remote control. For example, Magnum PI ran for eight seasons and has over 150 episodes, so we’d have to navigate past 150 Magnum PI titles to get to any videos whose names began with N-Z. Very annoying.

For a long time my workaround was to create custom genres and shove the TV shows there, then stick to genres when navigating Apple TV. This always felt kludgy, and I wondered why I couldn’t just drag the TV show episodes onto the TV Shows section in iTunes. This weekend I decided to look into it, and stumbled onto a Macworld article containing a solution so simple I had to do a double face-palm: change the Media Kind from Movie to TV Show.

iTunes file properties dialog, 'Options' tab

Once set, the video is automagically moved from the iTunes Media/Movies folder to the iTunes Media/TV Shows folder, and shows up in the TV Shows menu!

Be sure to input the show’s name in the Video section so the episodes will be properly grouped.

iTunes file properties dialog, 'Video' tab

The Macworld article pointed out that this technique can be extended to group ANY videos. This piqued my interest — my wife and I own a lot of DVDs that contain high-quality special features, including the entire James Bond collection, Star Wars collection, and classic films like Lawrence of Arabia. As I mentioned, I’m partially OCD, so I’ve ripped quite a few of these special features. Until now, they’ve all cluttered up my Movies menu just like the TV shows did.

TV Show grouping to the rescue! By changing the videos’ Media Kind to TV Show, they get moved to the TV Show section and can then be grouped. For example, I grouped all of my James Bond special features under the heading “James Bond Featurettes”. Now when I navigate the TV Shows section of iTunes or Apple TV, I only see ONE listing for James Bond Featurettes and no longer need to sift through 100+ titles.

iTunes still leaves a lot to be desired, but I’m a happy camper now that my files are well-organized and have proper metadata.

Setting OS X Desktop Picture Based on Time of Day

I recently changed jobs (Hello, FireEye!) and was issued a new MacBook Air. I spend a lot of time looking at the screen and was getting bored with the supplied desktop pictures. I also start work very early most days (7am-ish), and thought it would be nice to have a desktop picture that matches the mellow-ness of such an early hour.

Of course, this leads to daydreaming — “scope creep” in professional parlance — and next thing you know, I started thinking “well, maybe I could also set it to show a nice evening-themed picture at night”. Then “maybe I can get it to change both screens” (I use a laptop with an external display).

I also liked the challenge of putting together a script as quickly as possible. (In my off-hours, of course!)

I downloaded some nice wallpaper images from National Geographic, then created six folders that correspond to the major periods of the day: morning (early and late), afternoon (early and late), and evening (early and late). I organized my National Geographic photos into those six folders, based on the mood each photo evokes. For example, this one is an early morning photo.

Then I rolled up my sleeves and got out the trusty old AppleScript Editor. The resulting AppleScript is posted on GitHub, if you’d like to take a gander.

The gist:

  • It selects a folder based on the time of day.
  • It randomly selects an image from within that folder and displays it as the desktop picture.
  • It supports more than one monitor, with an option to either display the same image on all monitors, or display different images on each monitor.

The resulting AppleScript must be run at a regularly scheduled interval. I’m currently using GeekTool to run the script every 15 minutes, but I might eventually switch to a cron job for less overhead.
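If I do go the cron route, a crontab entry along these lines should do the trick (the script path is hypothetical):

# run the desktop picture script every 15 minutes
*/15 * * * * osascript "$HOME/Scripts/desktop-picture.scpt"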

Regardless, I’m quite happy with the way it turned out, and have already started daydreaming about other things I can hack together with AppleScript.