In a very well written, succinct article, Dr. Alan Hofmann, an internist at Preventia Clinics in Saint Jerome, Canada, wrote about recent but little-known efforts to use AI [Artificial Intelligence] to key in on mental illness flags in emails. As alarming as this might sound at first, the article surprises the reader by quickly explaining that these have been well-intentioned efforts to help crisis clinics and services identify potential high-risk contacts who might need urgent services.
I have to quote Dr. Hofmann’s definition of this phenomenon as he conveys the notion far better than I can: “A series of emojis, words, actions or even inactions can communicate how you feel at a given moment and when collected over time, comprise your “socionome” — a digital catalogue of your mental health that is similar to how your genome can provide a picture of your physical health.”
His article, “What happens when an algorithm labels you as mentally ill?,” appearing in this week’s WorldPost publication, a service of the Washington Post, discussed, somewhat playfully [in my interpretation, to be clear], extending this effort into the territory of diagnosing anyone psychiatrically by running their emails through an AI app or program that would flag designated codewords associated with depression, suicide risk, etc. Dr. Hofmann wrote that, lo and behold, this HAS been done on a larger scale than any of us would have thought…Microsoft Research and some crisis clinics have done preliminary work trying to craft programs or apps to facilitate real-time identification of callers in distress.
The latter half of the article went on to discuss the level of accuracy, which was only around 70% in associating certain words with depression. Interestingly, it emerged that even innocent words such as “ibuprofen” could be linked with suicide or overdose risk on mental health hotlines. Seventy percent concordance sounds pretty impressive, but the author helps the reader to learn that in the world of mental health and psychology this does not constitute very good predictability at all; it is barely above “a coin toss.”
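To make the idea concrete, here is a minimal toy sketch of what such keyword flagging might look like. This is my own illustration, not Microsoft Research’s or any crisis line’s actual method; the risk-word list is invented, and the “ibuprofen” example shows how an innocent message gets swept up, which is exactly why the accuracy is so modest.

```python
# Toy sketch of naive keyword flagging (hypothetical word list, not a real system).

RISK_WORDS = {"hopeless", "overdose", "ibuprofen", "goodbye"}  # invented for illustration

def flag_message(text: str) -> bool:
    """Return True if any risk word appears in the message."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return not RISK_WORDS.isdisjoint(words)

# An entirely innocent message still trips the flag -- a false positive:
print(flag_message("I took some ibuprofen for my headache"))  # -> True
print(flag_message("See you at lunch tomorrow"))              # -> False
```

A flat word list like this has no sense of context, which is the coin-toss problem in miniature: it cannot tell a headache remedy from an overdose plan.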
But I got to thinking about all this as my sneaky, weasel-like, conniving mind inevitably started whirling around the possibilities. In my own deranged computing life, I suddenly remembered that my voice dictation software is always asking me for permission to analyze all my documents and emails! Egads! If I consented, would it diagnose me? Likely not, but it certainly might gang up with my handy-dandy, super-trustworthy grammar-correcting program and send me back to a freshman English composition course.
But this personal, in-my-computing-space reminder got me thinking even more about the implications.
We all take for granted that the NSA and all the mind-boggling spy networks analyze our phone calls and emails for “bad words” that have to do with the War on Terror. These words, I imagine, are things like bomb, blow up, kill the Infidel, and so on. When I was a kid during the Cold War, words and phrases like dirty Capitalist pig, Molotov cocktail, and bourgeois would have brought the attention of the FBI and CIA to “Commies” and student radicals of the 1960s and 1970s.
Closer to home, those who are on Facebook™ [I no longer am, for many reasons which I will rant about here] ‘consent’ to having their posts analyzed for use by advertisers in constantly targeting ads to what they view and post, their photos, etc.
Twitter may do much the same thing; I am not sure. Tweets seem to me, in general, more unrestrained and loose-lipped, with more gutter language and “F-bombs,” than anywhere else I go on the Internet. What if the ‘mental health diagnosing’ apps were set loose on Twitter by enterprising wags, or hackers, or smear artists (“trolls” in today’s lingo) for blackmail purposes? Or by ill-tempered political types in an effort to smear tweeters with whom they disagreed politically?
Granted, this kind of idea seems ludicrous to me, born of the ‘hair on fire’ kind of media exaggeration/hysteria that could emerge from any Right or Left fringe territory. Then again, I thought, this could be a “good thing” if we could have our own personal mental health screener app. You could purchase it, install it in your browser, and then program it yourself to screen for words or code phrases that you found distasteful or not in line with your political or social-cultural views, tagging those tweets for exclusion.
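The hypothetical “personal screener” could be sketched in a few lines. Again, this is a toy of my own invention, with made-up function names and a made-up blocklist, not any real app:

```python
# Toy sketch of a hypothetical personal tweet screener: hide any tweet that
# contains a phrase from a user-chosen blocklist. All names are invented.

def screen_tweets(tweets, blocklist):
    """Return only the tweets that contain none of the blocklisted phrases."""
    lowered = [phrase.lower() for phrase in blocklist]
    return [t for t in tweets if not any(phrase in t.lower() for phrase in lowered)]

feed = ["Lovely sunset tonight", "Those dirty capitalist pigs again!"]
print(screen_tweets(feed, ["capitalist pig"]))  # -> ['Lovely sunset tonight']
```

Note that the filtering criterion is entirely the user’s own, which is precisely what makes it so appealing, and, as the next paragraph argues, so corrosive.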
But then one would further aggravate the siloing we already face in reading and contemplating only views consistent with our own preferences. And that is NOT a good thing in my view. It only reinforces echo-chamber discourse, further dividing us politically and “every which-a-way,” and keeps us from engaging and ‘building bridges’ of understanding, if that is still possible.
Or, given the mudslinging, insult-driven tenor of our times, one could use such “linguistic” grading of tweets to label the posted sentiments of others as “unbalanced” or “mentally ill” according to one’s own prejudicial scoring system. I can see it now: a genre of tweet replies along the lines of “well, my app shows you score 84% on the skreptomaniac scale!”
And the cycle of anger and flame wars, as they were called in the early days of “Bulletin Boards,” would continue and do our national discourse no good whatsoever.
And lastly, I am sure that opponents of our President would run their mental health scoring apps on Trump’s all-too-over-the-top tweets and move armchair, unethical, misplaced psychiatric diagnosing to new stratospheric levels of absurdity, worthy of publication in my long-time favorite journal of satire, The Wormrunner’s Digest, published by Dr. James V. McConnell of the University of Michigan in the days of Mort Sahl and the Golden Age of Satire.