Andrew Louis

    The Memex does New York

    May 14, 2017

    Here are a few examples of effects New York had on me last week: 1) it dramatically increased my mood, 2) it decreased my music listening (mostly because I was hanging out with friends or at the conference most of the time), and 3) it increased my walking. Though I didn’t walk a marathon this time, I walked for more than 16 hours in total.

    Graph of my average mood, number of songs listened to, and walking amount before and during a trip to New York.

    In general, it was an incredible week of conferencing (!!con), giving demos of the Memex, visiting friends, and exploring the city.

    On Tuesday, I presented at an Ember Project Night. I went over my lifetime of digital packrat behaviour, the history of the Memex, and then did a demo of some of the queries I can do on my data.

    Slides for Memex talk at Ember Project Night

    I caught several audience members simulating head explosions, which I hope was a positive reaction.

    My favourite part was the 30+ minutes of Q&A. Some highlights:

    • Has this project changed your memory? The records that I have act as “pointers” to memories that are in my brain but otherwise inaccessible; I’ll often remember details that I didn’t record after reading a note about a conversation or dream.
    • How much do you journal now? I made the mistake of journalling less while working on this app, because a good interface for it was perpetually in a “just about ready” state.
    • Do you capture smells? I think this is the first time I’ve been asked this and it was a cool question. Occasionally I’ll note something in passing, but never in a more intentional way. Smells are such a trigger of other memories, and the question has made me consider ways I could track them more deliberately.
    • How much is captured manually? My goal is to automate as much as possible, and this seems to get easier over time. For instance, I’ve recently automated sleep tracking (Fitbit) and food consumption (semi-automatic logging with Bitesnap). Some things, like internal indicators (mood, stress) or people I run into, will always be pretty difficult to automate.
    • Can you run machine learning on your data to figure out X? I avoid these sorts of applications and try not to use the term “quantified self” for what I’m doing. Until we’re cyborgs with embedded sensors tracking us at all times, my experience is that the datasets built from self-reporting or from the apps we use aren’t consistent or comprehensive enough to do anything sophisticated. For instance, I might track the majority of my alcohol intake but fail to record accurately during a night of heavy drinking, preventing any chance of an algorithm understanding the next day’s low energy levels.

    To dos:

    • find a way to get the talk recorded. This will help get it accepted at higher-profile events (good advice from a few attendees who used to work for TED)
    • come up with a more compelling pitch for the talk. A lot of people told me they loved the demo but hadn’t known what to expect from just reading the event description.
    • stop deferring the annoying task of picking a name for this project