🪳ITM: Roach in the middle

Sat Sep 7 '24

DSCF9733.JPG

A year ago, I bought my first Fujifilm camera: an X-T1. This eleven-year-old camera was such a delight to use that it quickly relegated my full-frame Nikon mirrorless to a glorified webcam, despite being in an entirely different weight class.

Part of why I love the Fuji cameras so much is that they speed up my image workflow. On the Nikon, I always had to set aside time to edit my photos to get them to look the way I wanted. On the Fuji, I can pretty much dial in the look I want on the camera using their film simulations and then just transfer them to my PC and upload them. Even this was still a bit of a pain, so images would pile up for a bit on the SD card until I dumped them all and uploaded them to Google Photos, my image-sharing service of choice.

I used the X-T1 exclusively for over a year—and I loved so much about it—but decided on an upgrade, for two reasons:

  1. The autofocus on the X-T1 is pretty dated, especially in low-light conditions (like our house). Since I mostly photograph a toddler who moves at near the speed of light, I needed something a bit quicker.

  2. For some reason, this camera applies a skin-smoothing effect in the JPEG processing pipeline at higher ISOs (low light). This isn’t something you can disable, so I would have to go and process the RAW photos for low-light situations.

The stars aligned when Fuji released the X100VI in February, and I put in a pre-order a mere hour after the initial release. Due to the camera’s popularity, it still took over six months to arrive, but it’s here now and I’ve been extremely happy with it. It’s a definite step up from the X-T1 in terms of autofocus, I can disable the skin smoothing, and it still has all the great color science and custom film simulations that I loved about the X-T1.


DSCF9608.jpg

With the X100VI in hand, I had further simplified my image processing workflow (to no processing at all), but I still had to transfer the photos off my SD card using my PC and then upload them to Google Photos. It might sound ridiculous, but I’m not the only user of the camera, and it’s not infrequent that Alex wants to take a photo and share it that day, not after I get around to uploading.

While waiting the six months for my camera to arrive, I explored the camera’s features and manual, and the frame.io integration caught my eye. This feature allows the camera to automatically upload the photos of your choosing (or all of them) to your frame.io account over wifi. This worked pretty well in practice, and I used it for about a month, until my trial period was up.

I wasn’t a huge fan of the frame.io interface, it meant our library was bifurcated between services, and I generally try to avoid Adobe products, so I was a bit sad to see that the camera didn’t support any alternative services for this feature. Having the camera auto-upload immediately after taking photos was such a perfect finish to my workflow optimization.


DSCF9093.jpg

Taking my data back

The initial idea was pretty simple: frame.io offers a free account option that lets you upload up to 2GB of photos and videos, and they also offer an API. I used their webhook support to have their service call my server whenever an asset was uploaded, at which point I would download the original files, delete them from their service, and upload them to Google Photos.

This service took about a day to implement and worked surprisingly well. If you watched the frame.io site while the camera was uploading, you would see an asset appear for under a second before being deleted and re-uploaded to Google Photos.
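A minimal sketch of that webhook flow, using only the standard library. The payload shape (`"resource" -> "id"`) is a placeholder, not the actual frame.io webhook schema:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def handle_asset_uploaded(payload: dict) -> str:
    """Pull the asset ID out of a webhook payload.

    The field names here are illustrative. The real service then
    downloads the original file via the frame.io API, deletes the
    asset from frame.io, and uploads the file to Google Photos.
    """
    return payload["resource"]["id"]


class WebhookHandler(BaseHTTPRequestHandler):
    """HTTP endpoint that frame.io calls when an asset is uploaded."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        handle_asset_uploaded(payload)
        self.send_response(200)
        self.end_headers()


def serve(port: int = 8080) -> None:
    """Run the webhook listener (blocks forever)."""
    HTTPServer(("", port), WebhookHandler).serve_forever()
```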

I called this small service 🪳 roach, and I was pretty happy with it. Images would take a while to show up on the frame.io site, as the service does a bunch of processing on the photos as they arrive, but it was still so much quicker than my old workflow.

There were two unresolved complaints in my mind at this point:

  1. My photos, including photos of my family, were still going to a third-party site, and I am not entirely confident that they are good wardens of my data. You can make the same argument against Google Photos, but at least that’s the devil I know.

  2. I wasn’t able to upload movies taken on the camera due to the 2GB project limit, as a single movie is frequently larger than this limit.

DSCF8999.jpg

RITM: 🪳 in the middle

While monitoring some uploads on my camera, I came across an intriguing menu item under the camera’s frame.io menus titled ROOT CERTIFICATE. This immediately caught my attention as I love a good RITM attack.

A picture of the camera allowing me to MITM it and frame.io
I’m not entirely sure why the camera lets you choose a root certificate to use with this service, but I’m very happy it does. Thanks, Fuji engineers!

I hurriedly set up mitmproxy, loaded the CA certificate that it generated onto the camera, and hijacked the DNS for api.frame.io on my local network. I was able to capture the traffic between the camera and the frame.io servers, but as soon as the camera attempted to upload a photo, it would disconnect. It turned out that frame.io uses S3 presigned URLs for its storage backend, and the camera exclusively verifies the certificates of these requests with the loaded root certificate as well, so it was just a matter of doing the same MITM on the S3 URL before I had the entire flow captured.

It turned out that doing the MITM was entirely unnecessary, as frame.io helpfully documents the entire C2C integration on their site, which is exactly what the camera was using.

Using this documentation, I rewrote roach to instead be a local service that listens on my network and emulates the frame.io API. As it’s local-only, I got to skip some complications around handling device authorization or pesky things like basic authentication tokens; I found that I could just immediately authorize the camera when it connects[1].
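As a rough sketch, "immediately authorize" can be as simple as minting a token for whoever asks. The function and field names below are placeholders, not the documented C2C pairing flow, and this is only reasonable because the service is reachable solely on the local network:

```python
import secrets

# Tokens handed out to cameras; on a LAN-only service,
# anything that can reach us is trusted.
ISSUED_TOKENS: set[str] = set()


def authorize_device(device_name: str) -> dict:
    """Unconditionally pair a device and hand back a bearer token.

    A real C2C server would make the user confirm a pairing code;
    this emulator skips that step entirely because it only listens
    on the local network.
    """
    token = secrets.token_hex(16)
    ISSUED_TOKENS.add(token)
    return {"device": device_name, "access_token": token}


def is_authorized(token: str) -> bool:
    """Check a bearer token presented on later requests."""
    return token in ISSUED_TOKENS
```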

My initial thought was to use MinIO as my local S3 backend, since I could return multipart presigned URLs to the camera and let MinIO combine the parts, but this proved to be problematic: the camera clearly expects some particular details from the AWS S3 URLs, so it was failing to upload. In the end, I just handle the uploads from the same web service, using the same URL structure as AWS S3 but ignoring most of the query parameters aside from the asset ID and part number.
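Extracting just those two values from an S3-shaped URL is straightforward; here's a sketch that assumes the asset ID is the last path segment and the part number is in the standard S3 `partNumber` query parameter (the exact layout the camera expects may differ):

```python
from urllib.parse import urlparse, parse_qs


def parse_upload_url(url: str) -> tuple[str, int]:
    """Pull the asset ID and part number out of an S3-style
    presigned upload URL, ignoring signature parameters.

    Assumes the asset ID is the final path segment and the part
    number is in the "partNumber" query parameter, as in S3
    multipart uploads.
    """
    parsed = urlparse(url)
    asset_id = parsed.path.rstrip("/").rsplit("/", 1)[-1]
    query = parse_qs(parsed.query)
    part_number = int(query["partNumber"][0])
    return asset_id, part_number
```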

Performance doesn’t really matter here: the camera uploads a single ~25MiB part at a time, limited by its networking stack, so any time spent processing and stitching together the photos is inconsequential.
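Since parts arrive one at a time, reassembly amounts to concatenating them in part-number order once the last part lands; a minimal sketch:

```python
def stitch_parts(parts: dict[int, bytes]) -> bytes:
    """Reassemble a multipart upload.

    Keys are S3-style part numbers (starting at 1); parts may
    arrive out of order, so sort by part number before joining.
    """
    return b"".join(parts[n] for n in sorted(parts))
```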


DSCF9507.jpg

In the end, this real fake frame.io service I wrote is my favorite solution. I plan to expand it with plugins based on file type to further improve my workflow, for example:

  • save RAW files to local storage instead of uploading them to Google Photos

  • transfer the movies to my computer for editing

  • transcode movies before uploading to Google Photos, which doesn’t like it when I use F-Log2
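That plugin mechanism could be a simple dispatch on file extension; the handler names below are hypothetical placeholders for the behaviors listed above:

```python
from pathlib import Path
from typing import Callable, Optional

# Hypothetical per-type handlers; bodies elided.
def save_raw_locally(path: Path) -> None: ...
def transcode_then_upload(path: Path) -> None: ...

# Map file extensions to handlers; anything unmapped takes the
# default path (straight upload to Google Photos).
PLUGINS: dict[str, Callable[[Path], None]] = {
    ".raf": save_raw_locally,       # Fuji RAW files stay on local storage
    ".mov": transcode_then_upload,  # movies get transcoded before upload
}


def dispatch(path: Path) -> Optional[Callable[[Path], None]]:
    """Pick the plugin for an uploaded file by its extension,
    or None for the default upload behavior."""
    return PLUGINS.get(path.suffix.lower())
```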

The one downside to this approach is that I can’t hijack DNS wherever I go, so the wifi uploading breaks once I leave my network. I can fall back to my cloud-based roach service in these cases, but I need to remove the root certificate from the camera first.

An alternative (that I haven’t attempted yet) is to use my phone as a hotspot, where I can use tailscale to reach my home services and hijack the DNS.


DSCF9602.jpg

This entire project was one of my favorites in recent memory: not only did I get to solve a real technical problem I had with some software, but I also got to do some hacky stuff along the way. If you’re interested in exploring either of these options for your X100VI (or, probably, any camera that uses the frame.io C2C integration), feel free to reach out to me with any questions or if you want a copy of the services I wrote for this.

DSCF9110.jpg