The company’s AI algorithms gave it an insatiable habit for lies and hate speech. Now the man who built them can’t fix the problem.



[Photo: Winni Wintermeyer]

He seemed a natural choice of subject to me, too. In the years since he’d formed his team following the Cambridge Analytica scandal, concerns about the spread of lies and hate speech on Facebook had only grown. In late 2018 the company admitted that this activity had helped fuel a genocidal anti-Muslim campaign in Myanmar for several years. In 2020 Facebook started belatedly taking action against Holocaust deniers, anti-vaxxers, and the conspiracy movement QAnon. All these dangerous falsehoods were metastasizing thanks to the AI capabilities Quiñonero had helped build. The algorithms that underpin Facebook’s business weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by. Fixing this problem, to me, seemed like core Responsible AI territory.


I began video-calling Quiñonero regularly. I also spoke to Facebook executives, current and former employees, industry peers, and external experts. Many spoke on condition of anonymity because they’d signed nondisclosure agreements or feared retaliation. I wanted to know: What was Quiñonero’s team doing to rein in the hate and lies on its platform?

[Photo: Joaquin Quiñonero Candela outside his home in the Bay Area, where he lives with his wife and three kids.]
[Photo: Quiñonero started raising chickens in late 2019 as a way to unwind from the intensity of his job.]
