negativesign
andrew catellier
  • Modeling a Desk Shelf Using Onshape for iPad

    September 04 2023

    Having watched an absurd amount of Home Office YouTube, I was thinking about how I could improve my office. I wanted a desk shelf, but they seemed really expensive for what they were. And because I want to do fascinating, fun, and weird things, I thought it might be fun to build one with my Dad. In true Leeroy Jenkins fashion, I decided I was going to build a model and make drawings of a desk shelf in some CAD program. But on an iPad. Because.

    In the past I avoided CAD and fabrication projects because I was being really stupid1 and refused to learn a new thing. But CAD is WAY more approachable than I thought it would be.2 The fundamental concepts aren’t that hard to learn if you give yourself the space to do so.

    I decided to try Onshape because I could sign up for a free account and they had an iPad app. I installed the app on my iPad and started making sketches, extruding, intersecting, assembling, and exploding.

    Overall the experience of building models was not too cumbersome. I didn’t have my keyboard attached, so there was a lot of menu juggling. The iPad app does in fact have keyboard shortcuts, but part of the appeal for me was using the iPad in tablet mode. I enjoyed being able to manipulate the model with my fingers and making fine adjustments and selections with my pencil.

    The models turned out well! But when building things with wood, you build them from drawings, not models. Unfortunately, it’s not possible to generate drawings on the iPad—they must currently be generated using the web interface. Bummer! Generating the drawings on my 2017 MacBook Pro was a little painful due to UI input lag, but that’s an old computer and this is full-blown CAD in a browser. Once completed, though, it’s easy to generate PDFs of the drawings and then share them.

    I like the idea that anybody could create an Onshape account and collaborate on a project together. One limitation of a free Onshape account is that all of your projects are “public.” I say “public” because if you want to view somebody else’s project, you must have an account. Links to models are not accessible unless you log in. I haven’t done lots of exploring on their web app, but given that restriction, I’m not totally certain how somebody would discover your account or models.

    The Onshape website talks about how they’re the fastest-growing, uh, CAD community or whatever, and I bet it’s because of their requirement to create an account to interact with their product at all. But hey, if MAU isn’t growing as much as your investors expect, you’ve gotta do something.

    Onshape was great but I’m not married to it. Next time, I’ll look for a different product and see if I can complete the entire workflow without leaving the iPad and without paying an absurd price for personal projects.

    Oh, and if you want some drawings for a desk shelf that resembles a product that retails for pert-near $400, reach out.


    1. This is always true. And if it’s not, something is very wrong. [return]
    2. That said, my only real CAD experience was with Drafix Windows CAD back in the early 90s when I was like 11? [return]


  • Momma Turkey

    September 03 2023

    A delicious cocktail inspired by a turkey parade in the fall. Invented by Shorty.

    Ingredients

    • one slice candied ginger
    • one orange
    • dash of Angostura bitters
    • 1 tsp vanilla extract
    • 1⁄2 tbsp agave syrup
    • 1 oz Benedictine & Brandy (B&B)
    • 3 oz bourbon (Wild Turkey?)
    • cinnamon stick
    • ice

    Preparation

    1. In a cocktail shaker, muddle candied ginger and two 1⁄2 inch strips of orange peel.
    2. Squeeze 2 tbsp of orange juice into the shaker.
    3. Add a dash of Angostura bitters, the vanilla extract, and agave syrup. Muddle it dawg.
    4. Add B&B and bourbon and stir 20 times or until all liquid ingredients have sufficiently mixed.
    5. Add ice. Stir 30-45 seconds.
    6. Use a lighter to ignite the cinnamon stick until smoking and cover with serving glass until the cinnamon stick stops smoking.
    7. Use a strainer to pour the shaker into the serving glass and garnish with orange peel.


  • Sorry for Blogging

    September 02 2023

    I came back to ye olde static site generator after three years and I think everything is still working. Not only that, the contraption I set up to post from my iPad still works too. It’s a miracle. At least until I have to update go.mod or whatever. Or I realize that my VPC got backdoored and has secretly been mining crypto without my knowledge.

    Anyway, Hugo is fine I guess. I chose it because I wanted to publish Jupyter notebooks but then I proceeded to post about running JupyterHub in an NVIDIA container and restarting a computer into a different OS using grub and some other things that are utterly useless now, 4-5 years later, but resulted in some inbound requests to write for other people’s blogs? No Jupyter notebooks though.

    Also there were the yearly summaries. I love doing those, but I sort of preconditioned doing them on the completion of publishing daily photos for that year, and I am now three years behind on that.

    Wha happen? Well, from some time in 2018 until earlier this year, I had two simultaneous jobs. Also there were small things like the pandemic, the Cameron Peak fire, family health issues, I got a new day job, and some other events. Most of whatever time was left went into Job #2, where I built on top of this cool neural network audio quality predictor thingy and documented it in what I really hope will be published as a journal article soon.

    While all that was happening, I watched people prepare themselves to fly in space, build actual rockets from scratch, build an entire video streaming platform, power their Tesla using turbine engines (??), and all kinds of other fascinating, fun, and weird things1. It made me think about all the things I’ve wanted to build, write, and make. I wanna do fascinating, fun, and weird things too! And I guess I want people to know about it, which is weird, but you know.

    So here I am again.

    Please like and subscribe and tell me what weird thing I should build in the comments below.


    1. Did you know there’s an entire cottage industry of people who make videos to sell home office products?? And a lawyer who only posts videos about lock picking? [return]


  • The Ground Beneath my Feet and the Sky Above my Head

    December 20 2020

    the ground beneath my feet and the sky above my head

    I’ve been trying to take and publish at least one photo for every day since 2010. Ten years, roughly, as of December 2019. I’ve published this diary on my website, and I’ve written about the project at least once before. From my first selfie (taken with a Canon T2i) to the last photo of 2019 (taken with a Leica M10!), I published 8,564 photos for this project—an average of 2.3 photos per day.

    Generating this document that chronicles ten years of my life has certainly been some kind of accomplishment. I sometimes struggle, though, to understand why I’ve continued to build this record. Partially, I admire the work others have done in this area and want to make something even a small fraction as interesting. Partially, I’m vain. Partially, it’s an instinct I can’t shake.

    This record of my life is both a memory aide and a memory censor. The photos I take and any text I write to accompany them necessarily highlight certain events at the expense of others. Even if the images I publish jog my memory and bring undocumented events to my mind, that historical context vanishes at the interface of my consciousness and the world around me.

    I suppose every person processes this existential tautology in their own way. Personally, I can’t yet improve on the text I wrote to accompany this image five years ago:

    inner storm

    inner storm

    the idea that the human body and mind are in some sense a temporal reflection is beautiful, perplexing and terrifying to me.

    it’s disappointing how poorly we parse these reflections. to some extent we can forgive ourselves—the reflection itself is a hopelessly incomplete representation.

    even so, i can’t help but wonder if we are getting better, or if we will.

    The photo processing speedrun I recently completed for 2019 reinforced this notion. I published almost 1,700 photos for 2019, but in many cases there was more than a year between taking a photo and publishing it. As my mind time-traveled through 2020 it somehow managed to forget parts of 2019 and ignore other parts. When I was finally able to process and publish photos from 2019 I remembered things that happened, remembered thoughts that I had, and realized just how much I had forgotten. No documentation effort, no matter how thorough, can effectively communicate the fullness of one’s life experience. Even to oneself.

    In the 2018 Ignite Boulder talk that I gave on this very topic I touched on the futility of documentation projects like this. I said this futility is both merciful and cruel. Good, bad, and mundane are all placed on the ambivalent altar of entropy.

    But I’m still doing it and I plan to continue. It’s worth the time and effort. Taking photos forces me to participate in the present. It makes me consider what I’ll remember. Perhaps most importantly though, it gives the gift of hindsight and forces me to confront my actions in a context outside of my present understanding.

    nocturnal

    sometimes i think it would be nice to have some cognizance of the implications of any given passing moment, but in those instants i remind myself that fully understanding the present is an act of theft against my future self.

    “If you can not measure it, you can not improve it.”

    May I never stop measuring.



  • 2019 in Review

    December 13 2020

    Another year, another year-end-review, I always say. Thankful to have an opportunity to reflect on another trip around the sun. You may think it’s odd to write a review of 2019 in December of 2020, but if y’all have 2020ed at all, you understand how remembering 2019 could be therapeutic.

    If you asked me, my friends, my family, or anyone adjacent to my life, you’d know that the main theme of my 2019 was work. My day job has been as intense as ever. We’re a small company and being a small company comes with growing pains. But we have accomplished so much more than you’d expect for a 3-4 person engineering team.

    • We built, tested, and deployed three (!) sensor packages. These sensor packages imaged thousands of acres at a ridiculously high resolution. Hardware is hard. Remotely debugging hardware is hard. I learned so much.
    • I automated the GNSS post-processing workflow because I didn’t know any better. It took a huge amount of work, but the workflow is finally relatively stable and robust.
    • I, again, did tons of sysadmin and IT work. Tons of it.
    • I got to work on a grant awarded by a prestigious government agency.

    While going through my 2019 work journal, I realized how little work I did that relates to my official title (and my career goals) and I’m frustrated about that. Even so, I encountered challenges and solved them because they needed to be solved. The passing of time has revealed just how difficult some of those challenges actually were. Many days I completed a legitimately absurd number of tasks. Often, I would go on to put in a couple hours for my second job as well. I did, however, manage to get set up to work from home—this has turned out to be important.

    The majority of my spare time was spent on my side hustle with the goals of attending ICASSP 2020 in Barcelona and trying to grow the old savings account as much as possible. In the process, and with help from my esteemed colleague, I built an accurate no-reference speech quality estimator—a thing that’s been sort of a holy grail in the speech processing field for decades. If you asked me whether or not this was possible even 5 years ago, I would have said, “no”. In our second iteration we reduced the number of parameters in the model by >99% and improved robustness. We wrote and submitted two papers, and the second ended up being accepted to ICASSP 2020 (mission accomplished!). I am proud of this, full stop. I am proud I was able to accomplish this in my spare time.

    Even though I spent so much time working, I really think I managed to wring everything I possibly could have out of 2019. When I wasn’t working, I:

    • Started editing my photos on an iPad
    • Spent quality time on the mountain
      • Ate amazing food, tolerated the dog, stared at the clouds
      • Saw the first signs of Spring
      • Had hammock and burrito time
      • Got socked in
      • Basked in the green
      • Smelled the waldflaurs
      • Ogled the clouds
      • Enjoyed the snow
    • Got some workouts in until my gym closed 😪
    • Saw Antonio Sanchez (and co.) perform at the Boulder Theater
    • Went on a few hikes
      • Palisade Mountain
      • Storm Mountain
      • Got soaked on Crosier Mountain
      • Ouray
      • Telluride
      • Utah
    • Spent time with friends
      • Went to Chicago to visit Steven and Jeena
      • Hiked around Crosier mountain
      • A going away party for Mark
      • Dylan’s birthday party
      • Drinks with Dustin and Carly
      • Brunch with the Denver crew
      • Bike rides around Boulder with Brian
    • Spent time with family
      • Went to Steamboat with my Dad a few times
      • Celebrated the 4th of July
      • Mountain time with Christina’s mom
      • Built a retaining wall
      • Went to Utah to see my sister and brother-in-law for Thanksgiving
      • Home to Cheyenne for Christmas
    • Watched Christina give a talk about her work
    • Went to eyeo for the second time—extreme spiritual refreshment
    • Managed to write four blog posts
    • Printed out all my 2018 averages
    • Ate some amazing food, drank delicious drinks
      • At Mateo (mmm, fries)
      • Many delicious meals made by Christina, but specifically these biscuits and gravy
      • Pizza at Backcountry
      • Work meeting with Charles at Burn’s Pub
    • Documented some construction and change in Boulder
    • Attended at least 20 meetups
      • Work in Progress
      • Denver Creative Tech
      • B.E.A.T.
      • Analyze Boulder
      • Boulder Python
      • Ignite Boulder
    • Gave a talk about Docker at Boulder Python—(I am looking through this talk again and cracking myself up)
    • Got a new Nintendo
    • Attended my favorite lecture series, Mixed Taste
    • Traveled to Ouray, Telluride, Grand Junction, and Palisade and got some good eating, drinking, and mountain biking in
    • Saw the Clark Richert exhibits as many times as I could
    • Attended the Great American Beer Festival for the first time
    • Tasted some bourbon in Kentucky
    • Visited family in Ohio
    • Got a new lens for the venerable seeing machine
      • Captured some reflection selfies
    • Saw the latest Star Wars movie at least twice

    I tend to have trouble giving myself credit for the things that I do. I am truly surprised about what I managed to accomplish this year. But it is also clear that my life was out of balance. I intend to rebalance in the future.

    I am disappointed in myself, though, for not taking stock of 2019 sooner. I made several realizations while writing this post that I wish I had made earlier. Realizations that would have affected decisions I made during 2020, now all but spent. I’m setting a goal for myself to process all my 2020 photos and write a post for 2020 by February 2021. Wish me luck.

    Finally, 2019 was my tenth year of attempting to capture and process at least one photo every day. I have some things to say about this and they merit a separate post that I will write some time in the future. Suffice it to say, though, that I am pleased to have this 10-year-long record of my life despite how much effort is required to create it. Perhaps my photography is not quite as good as it should be given that I’ve been practicing for 10 years. But looking through all my 2019 photos I’m pleased with their quality.

    On to 2020.



  • What is it that you do here.

    May 06 2020

    I don’t usually talk about the work I do (in my free time, 😅) but if you wanna nerd out for a bit, here are some of MY OWN THOUGHTS about some PUBLICLY-AVAILABLE RESEARCH that…just happens to have my name on it.

    The specific topic area is speech quality measurement. What does that mean? Well, you’re probably familiar with the following statements:

    “[voice service provider] sux0rz/always sounds good.”

    “[telephone company] never/always has good service.”

    “My [smartphone device] drops calls here.”

    “This building has better/worse service over here.”

    You’ve said these things, I’ve said these things, we all scream for [reliable voice communication “system”].

    Speech quality measurement is the practice of quantifying how you perceive the quality of sound that’s produced by a communication device/service when it enters your earhole.

    Think about the last phone call or video conference you had. Was there a time where you couldn’t understand the other person? A time when it sounded like the other person was talking into a Pringles tube lined with dirty socks? How did you feel about that? How did you, a human, take all the annoyances, missed words, robot noises, muffled sentences, and background noise and convert it into an overall opinion? Turns out there’s an enormous body of research dedicated to measuring and predicting your opinion when those things happen.

    The people interested in predicting your opinion can count dropped calls, measure a radio receiver’s signal-to-noise ratio (SNR), and log bandwidth usage. Some measurements have predictable impacts on what you think about [product] or even [location]. Nobody enjoys dropped calls, but I’m guessing you don’t know what SNR your phone’s radio managed for your last 5 phone calls. How low must that SNR be for you to give up on a call? In many cases SNR constrains the available network bandwidth to a certain bit rate. But do you know the bit rate threshold that causes you to hang up and try to get a better connection?

    It’s true that people can learn or predict relationships between these physical/software metrics and audio quality, but linking them to human behavior or opinion is Tricky. Because each human is Different. And human behavior is not Deterministic. Remember the two D’s of humans. Er, DnD. Different and non-Deterministic, DanD. Humans are DanD, I always say. At some point, though, humans will vote with their wallet (and/or attention 😢) and stop using a service that sux0rz. If [service provider] wants to stay in business, [service provider] needs to understand if their “system” does indeed sux0rz.

    Human-Based Measurement Techniques

    In order to figure that out, people can, and do, ask you for your opinion on the quality of [simulated voice call]. They ask, in a strictly controlled laboratory environment, how would you rate the quality of [simulated voice call] on a scale of 1-5? They pose this question to dozens or hundreds of people and they can come up with a good estimate of how most people will react to [simulated voice call].
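    As a toy illustration—this is the standard way ratings like these get summarized, not a description of any specific lab’s procedure, and the numbers are made up—you average everybody’s 1–5 ratings into a mean opinion score and attach an uncertainty to it:

    ```python
    import math
    import statistics

    # Hypothetical 1-5 ratings from eight listeners for one simulated call.
    ratings = [4, 5, 3, 4, 4, 3, 5, 4]

    mos = statistics.mean(ratings)              # mean opinion score
    sd = statistics.stdev(ratings)              # sample standard deviation
    ci95 = 1.96 * sd / math.sqrt(len(ratings))  # normal-approximation 95% interval

    print(f"MOS = {mos:.2f} ± {ci95:.2f}")
    ```

    With dozens or hundreds of raters instead of eight, that interval shrinks and the estimate of “how most people will react” gets correspondingly better.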

    Strictly controlled environments are expensive and so is getting you to come to them. Besides that, you make phone calls on the streets, in your car, at the clurb, and on the john. Lab tests are informative, but not representative. So some people will ask for your opinion in a “real-world” situation, like right after you call your therapist. Maybe you’ll answer, and maybe you won’t. But if they can gather enough opinions, they can overcome the lack of control and learn something about how well their “system” is performing. This approach is less expensive than a lab test, but it has lots of caveats. Your opinion can be affected by things other than the aural quality of the call, such as your therapist’s unhealthy interest in your naked math test dreams. It is possible to understand and handle these caveats, but doing so adds cost and uncertainty.

    Computer-Based Measurement Techniques

    Maybe by now you’re asking if we can teach computers to predict your opinion of a call. Well, yes. Yes we can. There are many approaches to accomplish this and here are three:

    1. Analyzing Metadata: making predictions from call and “system” metadata
    2. Comparing to Reference: analyzing the audio put into the “system” and comparing it to the audio output from the “system”
    3. Output Analysis: analyzing solely the output from the “system” that goes into your earhole.

    Over the years, people have had success with the first two approaches but each has caveats. The third has been a sort of holy grail—at least until the last few years.

    Analyzing Metadata

    Using call metadata (SNR, bit rate, call length, or other parameters) is powerful but the “system” is complicated—in some cases it’s comprised of two different handsets, two separate radio links, two separate humans, and a network that connects them, all with their own time-varying characteristics. Metadata is not always rich enough to characterize the interactions among all these parts.

    Comparing to Reference

    Comparing the output audio to the input audio has significant advantages compared to a metadata-only approach. One advantage has more to do with Claude Shannon than you’re comfortable with and another is that this approach fully captures the dynamics of the “system”. But [voice service provider] doesn’t have access to the unfettered input! They are not following you on the streets, riding shotgun in your car, clinging to your face at the clurb, or recording your poops. By the time [voice service provider] receives the “input” it has already traversed almost half of the “system.” This effectively limits use of this approach to the laboratory. Still valuable! Until recently, this has been the most accurate approach. But it’s not deployable.

    Output Analysis

    This method is essentially how humans form opinions of distorted audio. You have an idea about what it sounds like when Bob from work is standing next to you and droning on about his exploits during college: it sounds good, despite the content. You also know what it sounds like when he’s calling you from a crowded underground grotto: bad. But how do you tell a computer how human speech should or shouldn’t sound? It’s haaaaaarrrrrrd!

    If we could do that though, this approach could theoretically be deployed anywhere in the “system”. The ability to know, for example, that your opinion of the audio is high until it arrives at the cell tower down the road is a great debugging tool. It could be very helpful for [service provider] to pinpoint problem areas in the “system.”

    So is it possible to convey that information to a computer? Well, a colleague and I have been working on this for a while, and we came up with one method.

    This particular method is…pretty, pretty good. It’s been tested on tons of data: our dataset includes 13 languages, 1,230 talkers, and contains more than 330 hours of speech after augmentation. Our model performs well: predictions correlate very well with truth data (r > 0.91), and root-mean-square error (RMSE) is around 9% of the target value scale. Our model contains a relatively small number of parameters. Depending on how you train it the model can predict quality or intelligibility. Or some other thing! As long as you have the data. Not gonna lie—it’s one of, if not the best out there right now.
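    For concreteness, here’s what those two metrics mean, computed on a handful of made-up numbers (toy data, not our results):

    ```python
    import math
    import statistics

    # Hypothetical subjective truth scores and model predictions on a 1-5 scale.
    truth = [1.8, 2.5, 3.1, 3.9, 4.6]
    preds = [2.0, 2.4, 3.3, 3.7, 4.5]

    # Pearson correlation between truth and predictions (Python 3.10+).
    r = statistics.correlation(truth, preds)

    # Root-mean-square error, then expressed as a fraction of the 1-5 scale
    # (a span of 4 units).
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(truth, preds)) / len(truth))
    rmse_pct = rmse / (5 - 1)

    print(f"r = {r:.3f}, RMSE = {rmse_pct:.1%} of scale")
    ```

    High r means predictions move with opinions; low RMSE relative to the scale means they land close in absolute terms. You want both.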

    This method doesn’t understand anything about the words present in a given audio signal. Or who is saying it, or what language they’re speaking. The very properties that make this method robust to widely-varied input also just happen to not undermine your privacy.

    All that said, I’m proud that the reference implementation is available for free. The reference implementation includes four pre-trained models. Anybody can use it—the rising tide lifts your mom or whatever. The work was published at ICASSP 2020, and at time of publishing, it’s still possible to register and watch the talk.

    Anyway, just a little about what I, MYSELF, ONE HUMAN PERSON, REPRESENTING HIS OWN SELF AND DEFINITELY NOT ANY GOVERNMENT ENTITIES IN THIS POST have been thinking IN EXTREMELY VAGUE TERMS about lately.



  • Adjusting External Monitor Brightness on Your Mac

    April 18 2020

    I’ve been lucky to be able to work from home during this…crisis…and I’ve been using a 15” MacBook Pro and a Dell 4K monitor as a second screen. It’s a nice setup—again, I’m really lucky and thankful—all the extra screen real estate is so nice for writing code and viewing images in QGIS all at the same time.

    One thing that was less than optimal was changing the brightness of the Dell monitor throughout the day. The OSD menu is actually implemented quite well but it’s still clunky compared to changing the brightness on my laptop’s built-in display.

    Enter Monitor Control (hat tip Reid Beels). This is such a great utility. After you launch it, pushing the monitor brightness buttons on your keyboard automatically adjusts the brightness of the monitor that’s displaying the active application.

    Highly recommended if you’ve got a non-Apple display connected to your Mac. Thanks much to all the developers.



  • Navigating the Photo Metadata Wasteland

    March 31 2020

    I’ve been rewriting my multiple exposure software (again—the code was totally unmaintainable and totally horrible). I decided to switch from exiv2 to the excellent exiftool and pyexiftool for reading and writing image metadata. exiv2 is good! But exiftool is compatible with raw images from more camera systems.

    I didn’t have any tests so I failed to maintain the functionality of my old metadata writing tool. I couldn’t figure out why neither Lightroom nor Flickr could read the “Capture Time” I was attempting to set in metadata. I searched around and found this forum post where I learned that Lightroom writes one EXIF tag, two IPTC tags, and one XMP tag that all include the “Capture Time” metadata.

    Let’s unpack that just a tiny bit: there are three different metadata standards and one of them splits the date and the time into two separate items. Why!

    🤬

    The one Flickr pays attention to is EXIF:DateTimeOriginal. I had been writing Composite:DateTimeCreated, which apparently means something to exiv2 but nothing to exiftool. Maybe the exiv2 method writes all four tags like Lightroom but I’m too lazy to check.
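    Here’s a sketch of the four “Capture Time” values in question, as I understand that forum post—the tag names match what it describes Lightroom writing, but the example timestamp is mine. The thing to notice is exiftool’s colon-separated date format and the IPTC date/time split:

    ```python
    from datetime import datetime

    # A hypothetical capture time to write into all four tags.
    capture = datetime(2020, 3, 31, 18, 45, 0)

    # EXIF wants colon-separated dates; IPTC splits the date and time into two
    # separate tags; XMP takes an ISO 8601 timestamp.
    tags = {
        "EXIF:DateTimeOriginal": capture.strftime("%Y:%m:%d %H:%M:%S"),
        "IPTC:DateCreated": capture.strftime("%Y:%m:%d"),
        "IPTC:TimeCreated": capture.strftime("%H:%M:%S"),
        "XMP:DateCreated": capture.isoformat(),
    }

    print(tags["EXIF:DateTimeOriginal"])  # 2020:03:31 18:45:00
    ```

    Feeding a dict like this to pyexiftool’s tag-writing interface should cover Lightroom, Flickr, and anything else that only reads one of the standards.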

    I also had some trouble making newlines in captions—ultimately because of how pyexiftool wraps the exiftool CLI and how the exiftool CLI uses line feeds—but it is possible if you use a carriage return instead of a line feed.

    If EXIF orientation handling is a ghetto, it’s a small part of the tri-standard urban wasteland that is photo metadata. Documentation is sparse and hard to come by, but thankfully the exiftool forum has been an oasis.



  • Use GRUB to Boot From Ubuntu Into a Different OS

    March 14 2019

    Yesterday I was working from home and needed to boot my Ubuntu-based office computer into Windows. I really didn’t want to brave the bomb cyclone (again) so I did some quick searching to see if I could do this remotely—without manually interacting with the EFI boot screen.

    Surely you should be able to tell GRUB which OS to start on next boot, right? Right. Here’s how.

    First, use this one-liner to get the name of the GRUB menu item you want to run on next boot:

    $ sudo awk -F\' '/^menuentry / {print $2}' /boot/grub/grub.cfg|cat -n|awk '{print $1-1,$1="",$0}'
    0   Ubuntu
    1   Windows Boot Manager (on /dev/nvme0n1p1)
    2   System setup
    

    Since I wanted to boot into Windows, I typed:1

    $ sudo grub-reboot "Windows Boot Manager (on /dev/nvme0n1p1)"
    

    Then, reboot the machine.

    $ sudo reboot
    

    Nice! The machine should now boot into Windows.

    💪💪💪💪


    1. You should be able to just specify the number of the GRUB option you’d like to select, but I was having trouble with that and couldn’t figure out why. I do know that for me, specifying both the grub-reboot and the reboot command on the same line wasn’t working. [return]


  • 2018 in Review

    January 13 2019

    Last year I wrote a small summary of the things I accomplished throughout 2017. I am glad I did that last year and I wanted to do it again. So. Here is another bulleted list to summarize the most recent arbitrary unit of time that I managed to traverse.

    I didn’t switch careers, there were no weddings, and I maybe worked more than I should have. But I did make a concerted effort to push my creative/not-day-job work forward, held down a side-hustle, somehow managed to go on several vacations, and put some money away for an emergency or a self-imposed sabbatical.

    • At work I:
      • Further developed a GIS image processing pipeline that the company depends on as a first processing step
      • Contributed to an open source project
      • Evaluated imaging systems and eventually designed and integrated a custom system
      • Containerized our processing pipeline/environment
      • Developed tons of internal tools and even a couple products
      • More sysadmin and IT work than I’d care to admit
    • On top of my main job, I managed to maintain a side hustle doing deep learning
    • Took so many photos, which resulted in 333 dailies, the most so far
    • Met a former band member of The Rolling Stones because Christina is a Big Deal
    • Bought a brand new seeing machine (after almost buying an even more extravagant one)
    • Ate so much good food. Christina is extremely talented and creative in the kitchen and I am so, so lucky that she enjoys making delicious food. I was also able to enjoy some delicious meals in restaurants, including a visit to Frasca.
    • Worked to publicize my art/non-work accomplishments:
      • Entered an Ello art contest
      • Displayed some work at BMoCA’s open wall
      • Gave a presentation on my avgday project at the amazing eyeo festival
      • Submitted a photo project to the Lenswork Seeing in Sixes publication
      • Updated and redesigned my blog
      • Gave a talk to 1000+ people at Ignite Boulder
      • Submitted several talk proposals to the Boulder Python meetup and gave one of them
    • Still managed to get away from the grind:
      • Went to Portland and Seattle to enjoy the food and be tourists
      • Traveled to Minneapolis to attend the eyeo conference for the first time—my new favorite conference—where I met Teju Cole
      • Portland (again!) for xoxo
      • First occasion to spend time in Montana—Whitefish and Glacier National Park
    • Managed to get pictures of airplane shadows on two occasions (whatever I am proud of this don’t @ me)
    • Made it down to Denver for Mixed Taste in the summer
    • Saw Sylvan Esso (and see-through plastic pants) at Red Rocks
    • Mostly worked through my 35th birthday
    • Only rode the mountain bike a couple times ☹️

    All in all, an incredibly eventful year. So many things happened; it went by so quickly. I know I missed some important things. But I do think it’s good for me to take stock of the year and I’m really pleased that my dailies project—having completed its ninth year—has facilitated this so well.

    Here’s to 2019 and continuing to grow in a positive direction.



copyright © 2011-2020 Andrew Catellier thisisreal.net