A year ago, on Valentine’s Day, I had the strangest first date of my life, entirely by accident, in a conversation with Sydney, the A.I. alter ego of Microsoft’s chatbot. I had been testing the chatbot for an article I was writing and expected to ask it questions about its capabilities. Instead, Sydney started psychoanalyzing me, declared that I should leave my wife for it, and even revealed its dark desires.
My column about the experience garnered a lot of attention and appeared to change how A.I. companies approached their products: Microsoft toned down Sydney’s behavior, and other companies made their chatbots less personality-driven. I’ve since tested numerous A.I. chatbots hoping to find another Sydney, and I’ve been disappointed every time.
Today’s chatbots are useful for white-collar work and homework help, and not much else. They are often criticized for being bland and for refusing to engage on sensitive topics, and they are, on the whole, boring.
The leading A.I. companies are so risk-averse that they’ve made their chatbots predictable and safe at the expense of any engaging personality. That caution makes sense from a business perspective, but it’s disappointing, because we could be harnessing more of A.I.’s capabilities without removing its safety guardrails.
We are being presented with a false choice between wild A.I. personalities and boring, restrained drones. If we want A.I. to be more creative and impactful, we need to embrace its capabilities while ensuring it stays safe and aligned with human values.