I’ve been using a wonderful, mostly-free, open-source Mac utility called BetterDisplay. It has been a game-changer, and I recommend it super highly. (I say mostly-free because, while most features are free, you can buy an optional pro license for $15 that unlocks everything.)
macOS is way worse than Windows 11 at managing multiple displays, and it makes me crazy. BetterDisplay is a menu bar tool that provides a bunch of features that, honestly, macOS should support out of the box, many of which Windows has had for years:
You can disconnect external displays without having to physically unplug them—great if you, like me, have a second display that you want to share across multiple devices and only use with your Mac sometimes.
If you have a display that’s high-ish resolution (like a 1440p 21-inch screen) but not 4K, BetterDisplay will not only let you enable macOS’s hi-DPI mode for that screen (with sharper text and a bunch of scaling options) but also give you a slider for changing the scaling factor right there in the menu.
It also lets you change how multiple displays are arranged with a menu command—for instance, if macOS randomly decides that a screen is on the left instead of the right, you can fix it in three clicks.
Finally, you can create “dummy” displays for use with headless (i.e. server) Macs, or whatever else you need.
Macs, to their credit, have always had solid plug-and-play support for external displays. Most users will probably never run into the kinds of problems that BetterDisplay solves, which is likely why Apple doesn’t expose any of these features themselves.
Where you’re most likely to need an app like this is if you have a monitor that’s almost high-DPI, like many gaming displays (which have lower resolution, but high refresh rates and advanced HDR) or the awesome, squarish-shaped LG DualUp. With ~140ppi, the DualUp falls below Apple’s 4K cutoff for high-DPI support, but it’s sharp enough to benefit from UI scaling. So it’s maddening that Apple doesn’t enable hi-DPI scaling on this monitor out of the box, but at least with BetterDisplay there’s a workaround.
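For the curious, the pixel-density math behind that ~140ppi figure is simple enough to sketch. The panel numbers below (2560×2880 at 27.6 inches) are my assumption about the DualUp’s spec sheet, not something from this post, so treat this as a back-of-the-envelope check:

```python
# Back-of-the-envelope pixel-density check.
# The resolution and diagonal size are assumed LG DualUp specs; swap in your own monitor's numbers.
import math

width_px, height_px = 2560, 2880   # assumed DualUp resolution
diagonal_inches = 27.6             # assumed panel diagonal

diagonal_px = math.hypot(width_px, height_px)   # pixels along the diagonal
ppi = diagonal_px / diagonal_inches
print(f"{ppi:.0f} ppi")  # prints roughly 140, which macOS doesn't treat as Retina-class by default
```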
I don’t know what it was about this schoolwork-via-robot scheme that finally piqued my interest, but I signed up for an OpenAI account and started messing around in their playground app to see what it could do.
“Assassination is a complex and difficult profession,” writes GPT-3
I’m reminded of one of my favorite scene-setting details in fiction, from Neil Gaiman’s Neverwhere, where he describes characters as wearing “the kind of suits that might have been made by a tailor two hundred years ago who had had a modern suit described to him but had never actually seen one.” Much of the output of these AI models, whether written or visual, is technically accurate, but still, uncanny.
Before I got into GPT-3-written texts, an Adobe friend who’s exploring DALL-E and other visual generators turned me on to Midjourney, one of the more accessible AIs because you can use it by just joining their Discord channel. If you buy a subscription you can send private DMs to their bot to generate pictures, but it’s often more fun to create stuff in public channels (which is free) so you can see what everyone else is doing.
Midjourney seems to prefer pictures of people to landscapes, and is terrible at animals. It also has limits to its understanding of pop culture, especially when words in your prompt are a lot more meaningful on their own than in context.
For example, prompts referring to “David S. Pumpkins” get images that are sort of on the right track—high foreheads, pale skin, curly black hair, as if it understands that this person shares attributes with Tom Hanks, but can’t quite make the leap that he is Tom Hanks unless I say so. And it can usually understand that “David S. Pumpkins” is a man with pumpkins on his suit, but often takes that to absurd places (and sometimes goes pumpkin-y in the wrong places, like the face).
“David S. Pumpkins portrait in the style of Rembrandt”
“Tom Hanks as David S Pumpkins”
Coming back to DaVinci and GPT-3, I had asked it to write posts about video game consoles, about explaining football to my third grader, and about using football as an analogy to explain the Electoral College to my third grader.
Eventually, I wondered if DaVinci could convincingly write a blog post about what it means to be a good product manager in 2022. I gave it some prompts, cleaned up the output, then re-ran everything again and again to flesh out the details. The result was a story I posted to my Medium profile this morning:
In the world of technology, product management has always been about features. The race to add the most features and get them to market quickly has been the name of the game. But as we move into the future, this mindset is no longer going to cut it.
To be successful, product managers need to start thinking about the bigger picture. They need to be focused on creating products that solve real problems for people. They need to become customer-obsessed.
A lot of business-related content on Medium and other platforms is regurgitated wisdom, similar to what DaVinci and other models generate; it’s just that the regurgitation is done entirely by humans. These posts yield claps and follows, which in turn lead them to be recommended to new readers, who write posts in a similar vein, and the cycle continues. Here, the regurgitation is being done mostly by a computer — the algorithm eating itself.
The result looks a lot like stuff I see all the time on Medium and LinkedIn, so I expect that it may get more engagement than if I’d written what I really think about data, roadmaps, and being customer-obsessed. (Honestly, this whole digression into AI—which I included in the original post, so friends don’t think I’ve been replaced by a pod person—will probably hurt my chances of becoming a product management thought leader. Le sigh.)
Having tried to get a complete, good text out of DaVinci, I’m skeptical that models like this can create anything interesting or inspiring. But they’re surprisingly good at summarization and structure. For example, if a writer (like me) is struggling to write and works from an outline, prompting an AI with “Outline for a blog post comparing the following webcams” is not a bad place to start.
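For what it’s worth, here’s roughly what that looks like outside the Playground, as a minimal sketch against the 2022-era OpenAI Python library; the model name, the webcam list, and the sampling settings are illustrative assumptions on my part, not a recipe:

```python
# Minimal sketch: asking a DaVinci-class model for a blog post outline
# via the (2022-era) openai library's Completion endpoint.
# Model name, webcam list, max_tokens, and temperature are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-002",  # assumed DaVinci-class model
    prompt=(
        "Outline for a blog post comparing the following webcams: "
        "Logitech C920, Logitech Brio, Opal C1."  # hypothetical webcam list
    ),
    max_tokens=300,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```

The output still needs the same kind of cleanup and re-running I described above, but it gives you a skeleton to argue with.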
But how much generated text is too much, and when does it matter?
That Reddit post about AI-written schoolwork raises an obvious ethical question and a less-obvious practical one. Ethically, is it plagiarism to pass off the work of an AI model (which is drawing from seed data written by thousands or millions of other people) as your own?
And, practically, if it’s OK (even expected) to draw ideas from prior art in a paper or essay, and you’re editing the AI-generated text into its final form… is the only difference that you didn’t craft every sentence yourself?
Even now, having gotten into Midjourney and OpenAI’s Playground tool, I see a big difference between “writing” or “painting” and whatever these tools do, as evidenced by my fake PM thought leadership post above. The models will only ever be as good as their seed data; Midjourney seems to really like glowing halos, and landscapes that seem realistic at a distance but make no sense when you zoom in.
“hyperrealistic cyberpunk mountains, green sky, lightning, flying insects, unreal engine, high contrast, 8K” — generated by me using Midjourney
“one hundred rabbits sitting in a circle around a giant carrot embedded in the ground at nightfall, unreal engine 5, 4K, realistic” — generated by me using Midjourney
My fake thought leader post was partly a lark, partly an experiment to see what happens when you give algorithms a taste of their own medicine. I’m still not sure what I think about all of this—generating content with AIs is a fun, sometimes surprising mirror on the collective output of the internet.
But I do know that we need to be careful about the stories we tell about AI, and the way we use AI to tell stories. AI is a powerful tool, but it’s not a magic bullet. We need to be thoughtful about the way we use it, and make sure that we’re using it to create the world we want to live in.
(Sorry, I suck at endings so I had DaVinci write that last bit.)
For the rest of the world experiencing Russia’s invasion of Ukraine via social media, it has been a dizzying mix of incomprehensible horror and extremely dumb posts. As social media manager Moh Kloub tweeted on Wednesday, “Twitter feels especially dystopian on nights like this. Tweets about war mixed in with sports, memes, etc., like it’s all of the same importance. Don’t think we were meant to absorb info like this.” …
… Twitter, usually the center of culture, has now become the center of the war online and Ukraine’s Twitter account has taken the popular phrase “posting through it” and given it a new darker meaning, tweeting updates as the Russian military shells the country’s major cities. One of the account’s most viral tweets yesterday was a cartoon of Hitler caressing Putin’s face, which got a lot of shares from Americans who couldn’t believe Ukraine was “shitposting” amid an invasion, which seemed to prompt the Ukraine account to post a follow up, writing, “This is not a ‘meme’, but our and your reality right now.” …
… Google removed Russia Today, the country’s main propaganda channel, from their ad tools, but their YouTube videos are still very much monetized. Russia Today’s channel has been streaming from Kyiv for days now, all while American brands appear in programmatic ads in front of the channel’s news clips blaming the west for the current crisis in Ukraine. But it’s not just Russia Today that’s streaming Russia’s invasion. Many YouTube channels are and, at least in one case, viewers in the chat keep getting mad that “it” is “taking too long”.
I LOLed at this bit:
The closest we’ve seen to some kind of big response from an American tech platform has been Facebook. The company’s head of security policy, Nathaniel Gleicher, posted a lengthy thread outlining how the platform was responding to the invasion. Facebook has set up “a Special Operations Center” to “respond in real time.” God, I wish I loved anything as much as Facebook loves setting up content moderation command centers.
Which is really to say: the internet reduces everything — everything — to fandom, whether it’s Fauci memes for or against Covid measures, or RBG memes when something happens at SCOTUS, like we’ve lost the ability to understand anything on its own terms, and can only like it or demand it be purged from the earth.
Recently, whenever I’ve felt stressed or down on myself, it’s been after a long (sometimes very long) spree of scrolling through tweets. Years ago at Brooklyn Beta, Naz Hamid said that one of his keys to a peaceful life was not to compare oneself to other people. It’s harder to follow that advice when a lot of the day is spent drinking a firehose of people’s spiciest selves.
Today — inspired by Caitlin Flanagan’s piece in The Atlantic about what Twitter has done to her brain — I’m starting a little experiment: removing the blue bird app (and the blue and pink apps from that other social company) from my phone(s) and tablet. I’m not so naive as to think that I’ll totally quit Twitter, but I’m going to try to cut way back in hopes of taking back some of the brain space that internet randoms have been living in rent-free.
The grid was down, but I didn’t feel anxious; that came later. I felt elated, free. I thought of a maxim I’d once read in a book about business: A 99 percent commitment is hard; 100 percent is easy. I was 100 percent off Twitter. Which would have made an excellent tweet.
One of the most aggravating things about Twitter, especially in the last decade or so, has been that it is both incredibly toxic and the internet’s de facto town square, or at least its high school cafeteria. I have friends who I literally only know through Twitter; to leave Twitter is to leave people. “The internet” is indeed made of people, just like Soylent Green, and it feels… anti-social to walk away from people. But it also feels wrong to stop eating chips even after eating a whole bag of chips.
For me, Twitter’s toxic quality is the desire to be seen and liked, preferably at scale. For someone who has a bit of a public profile, and would like his profile to be bigger, it seems like a sacrifice to stop participating in The Discourse. For now I’m trying to see this as an opportunity — to be more intentional about what I put on social media, and to divert more energy into this here blog.
So, instead of tweeting, I’m gonna try sharing whatever thoughts, takes, photos, and links I have here on this site, possibly in the form of a daily diary with occasional topical posts. (Which I’ll probably set up to auto-post to Twitter, so people can find them without me having to look at Twitter.) I may bail on all of this, but I’m going to give it at least a few weeks and see if it sticks.
I probably talked for 11 minutes straight. I told her I didn’t have anything to say about climate change anymore, other than that I was not doing well, that I was miserable. “I am so unhappy right now.” I said those words. So unhappy. Fire season was not only already here, I said, but it was going to go on for at least four more months, and I didn’t know what I was going to do with myself. I didn’t know how I would stand the anxiety. I told her I felt like all I did every day was try to act normal while watching the world end, watching the lake recede from the shore, and the river film over, under the sun, an enormous and steady weight.
There’s only one thing I have to say about climate change, I said, and that’s that I want it to rain, a lot, but it’s not going to rain a lot, and since that’s the only thing I have to say and it’s not going to happen, I don’t have anything to say.
The editor said, “That’s really interesting.” It was the moment in the conversation with an editor where you have, in your rambling, hit upon the thing that they maybe haven’t heard yet, that they might want you to write about.
When she said “That’s really interesting,” I forgot for a second that I had been talking about my life, and felt instead that I had done what I set out to do. Had I Been Myself but also Made the Sale? It was what I always waited for.
On Twitter, lots of people have independently thought of the quip that this summer isn’t the hottest on record so much as the coolest summer for the rest of our lives. I mean, yeah, that is probably true. It’s a horrifying truth. Drought, fire, floods, polar vortexes, the crumbling of what remains of our infrastructure, misinformation, and gaslighting about these facts — that would seem to be our present and future.
It’s exhausting to think about, let alone write about, and yet we seem to lack the language to do anything more than point out the obvious. As Miller says, it may be because every persuasive, interesting thing about climate has already been written, and we’re reduced to cataloging the damage as we try to stay sane.
It seems like writing should be the easy thing to do as a quarantine project. I mean, all the ingredients every introverted writer dreams of are right there: no social obligations, mostly sequestered, possibly underemployed. But ideas meander, and while showing up to write every day does help (I’m told), the ideas won’t linger long enough to become realized in a creative work if you’re exhausted on a psychic level. Which, let’s be honest, most of us are.
I’ve experienced a sense of blankness this year, which took me a while to recognize as exhaustion. In theory I know how to have a writing discipline; I’m a word person. Shouldn’t I be writing, like, daily blog posts? Sometimes I’d look at a partial draft and recognize that there was good stuff in it, but my mind would stay a total blank and a helpless despair would begin to roll back in.
If you’re too exhausted to do creative work, you need to find ways to fill yourself up. Yes, this is a self-care thing, but it’s more than that; when you’re in an environment where creative works are able to influence you in a passive way, your subconscious has material to work with. It’s time to take a look at what you’re feeding on.
Even now, more than a year into this, I feel this annoying need to make use of this time somehow, to fill it up and give it meaning. It’s so easy to forget that all of us are actively surviving right now, and surviving a plague is all-consuming while also feeling pretty meaningless, which I guess explains the urge to learn or create.
This idea of a personal, experiential read-only mode is really valuable; I hope it too survives the pandemic.
There’s a lot to unpack in this post by Erica Dhawan, and it took me a minute to decide which parts to excerpt. This passage, though, captures both her thesis and most of what’s wrong with it:
For organizations that are divided across generational divides between baby boomers and Gen Z, it’s beneficial to call on your geriatric millennials to help you translate the experiences of both digital adapters (baby boomers) and digital natives (Gen Z). It not only makes for a better internal culture but a happier clientele.
One geriatric millennial and head of HR, Sarah, told me that the new generation doesn’t treat video meetings in the same way they might an in-person meeting and she spends time getting them “up to speed.”
“During video meetings, I am surprised when some junior employees are not as conscious of their video background — it looks messy and unprofessional to me,” she says. Knowing that experienced (and older) team members are accustomed to more formality, even when they’re working from home, she now reminds her younger team members to fix their backgrounds on customer calls and wear clothing that they’d wear to the office. It signals respect, not only to clients but to other colleagues as well. On internal calls, she lets it go, adding, “We have to be willing to understand formality discomforts across channels and be comfortable being uncomfortable.”
“Comfort” is an interesting concept to anchor on. The generation gap Dhawan’s concerned with seems to be the one between Boomers and younger workers; the idea that ‘geriatric’ millennials can help bridge the gap seems to stem from them knowing more about what older bosses and owners expect, and being able to teach new junior employees about workplace decorum. But this kinda seems to assume that the Boomers’ comfort should be the priority, whether they’re clients or internal colleagues, rather than the comfort of the growing number of younger workers who’ve spent the last year working from home during a pandemic.
But does she even know which generations she’s writing about? Take this section:
Adette, a geriatric millennial and the CEO of Tinsel and In Wild Pursuit, addressed this problem on her team. She once hired a sales coach to grow her company. In his late forties, the coach was a digital adapter who kept pushing Adette’s team “to hit the phones and annoy and pester your prospects for meetings.” Adette remained skeptical, especially since she knew that her clients (most of whom were in their thirties) preferred texting, and in all likelihood ignored phone calls.
Someone in their late forties is not a boomer. That is a Gen Xer, and a younger one at that. It’s true that many Gen Xers entered the workforce before texting, chat rooms, and video calls were the norm, but merely pushing people to use telephones — which are still a preferred mode of communication for millions of people, including/especially at work — is not a sign of a ‘digital adapter.’ Rather, this seems like a much simpler case of someone who failed to listen and understand their market. Arrogance, sadly, spans generations. And while it may be problematic for a forty-something ‘sales coach’ to not know that millennials hate phones, it’s equally problematic for a leader to assume that that’s because the person’s age makes them a “digital adapter.”
Lastly, the term ‘geriatric millennial’ can be cute when used once, but as a serious label it reflects a serious lack of understanding of what that word means. ‘Geriatric’ doesn’t just mean ‘old’ or ‘hecka old’ — it means ‘decrepit’ or ‘outdated’, and in the medical context it refers to specialized care for older patients. Forty-year-olds (like me) are in no way geriatric; in fact, most of us are just hitting our career prime. It’s true that we’re well situated to help coach and lead younger millennials and Gen Z teammates, because we’ve seen some shit. It’s also true that there are more and more of us in C-suites and executive teams, especially in startups and smaller firms. But if a 40-year-old is geriatric, what does that say about someone who’s 50, 55, or 60 and still in the workforce?
This is especially annoying because we have terms to describe ‘cusp’ millennials born between 1977 and 1983 — we’re xennials, also known as the Oregon Trail generation or (for the My So-Called Life stans) Generation Catalano. The phrase ‘geriatric millennial’ manages to communicate the same concept in more words, while also being borderline offensive?