The top story:
It’s likely you’ve become numb to how often you interact with technology. But what if that technology is doing more harm than good? As more companies rely on artificial intelligence to power their products, we’re seeing more cases of dangerous AI dysfunction, from autopilot car crashes to content moderation bots promoting harmful social media posts. These conflicts will start to play out in the legal system in 2023, helping to define the role of AI in our society going forward.
What we’re waiting for:
- Programmers filed a class action lawsuit against Microsoft, its subsidiary GitHub, and their partner OpenAI in December over Copilot, an AI tool that generates software code drawn from existing code on the internet without crediting its creators. According to the lawsuit, Copilot “ignores, violates, and removes the Licenses offered by thousands—possibly millions—of software developers, thereby accomplishing software piracy on an unprecedented scale.” A trial date has not yet been set, but the case is likely to be a landmark one, helping determine the limits of AI learning.
- An AI is set to act as a legal assistant in an upcoming speeding-ticket case. Speaking to the defendant through an earpiece, it will direct them on what to say throughout the proceedings. The bot was developed by San Francisco-based startup DoNotPay, which says it will cover any fines if the AI’s guidance doesn’t work. The company has not revealed the location of the case or the name of the defendant for privacy reasons.
Langdon Winner, the author of the book Autonomous Technology, describes a “chain of reciprocal dependency”: whatever the disparities in autonomy between humans and technology, our reliance on it will continue to grow, even as experience details its harmful and often irreversible consequences.
Winner says that the implementation of this technology has “repeatedly confounded our vision, our expectations, and our capacity to make intelligent judgments”: our choices and arguments have shifted, and the “patterns of perceptive thinking that were entirely reliable in the past now lead us systematically astray.”
In other words: autonomy is clouding our judgment, making us weaker, not stronger.
People to follow:
- Langdon Winner – Chair of Humanities and Social Sciences in the Department of Science and Technology Studies at Rensselaer Polytechnic Institute, and author of Autonomous Technology.
- Madeleine Clark Elish – Researcher on the influence of Artificial Intelligence.
- PJ Rey – Wrote a Cyborgology essay showing that people place substantial faith in technology, surrendering control and ceding autonomy to the device itself.
Companies to watch:
- Tesla – The electric auto manufacturer rolled out what it calls “Full Self-Driving” capability last month, even though its past driver-assist programs have had disastrous consequences.
- Microsoft – After its subsidiary GitHub released Copilot, an AI tool that writes code, developers filed a class action lawsuit because the tool doesn’t credit those who originally wrote the code it draws on. The case is at the forefront of the question of how AI tools should credit what they generate (and copy).
- Facebook, Instagram, Twitter, and TikTok – Parents are suing social media companies, alleging that their algorithms push harmful content to young users.
A longshot bet:
Consumers have become so reliant on technology that, whatever warnings arise, they will keep using it, spending money on it, and even investing in it, convinced the outcome couldn’t possibly be bad. Those funding technological development will carry on as though nothing has changed.