Researchers are exploring using AI to prevent toxic content before it's posted online by prompting users as they type messages.
Users appreciated the concept of receiving alerts about potentially harmful language but had concerns about privacy and disruptions to natural conversation flow.
Implementing proactive measures like AI-based content moderation prompts not only eases the burden on moderation systems but also enhances the quality of online interactions by promoting empathy and understanding.
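The idea above can be sketched in a few lines: score a draft message before it is posted and, past a threshold, show a nudge instead of silently blocking. The keyword scorer below is a toy stand-in for a real ML toxicity classifier, and the threshold and wording are illustrative assumptions.

```python
# "Nudge before you post" sketch. TOXIC_TERMS and toxicity_score() are toy
# placeholders for a learned toxicity model; not any product's real logic.

TOXIC_TERMS = {"idiot", "stupid", "hate you"}  # placeholder lexicon

def toxicity_score(draft: str) -> float:
    """Toy stand-in for a classifier: fraction of flagged terms present."""
    text = draft.lower()
    hits = sum(term in text for term in TOXIC_TERMS)
    return min(1.0, hits / 2)

def prompt_if_toxic(draft: str, threshold: float = 0.5):
    """Return a gentle nudge if the draft looks harmful, else None."""
    if toxicity_score(draft) >= threshold:
        return "This message may come across as hurtful. Post anyway, or edit?"
    return None

print(prompt_if_toxic("you are an idiot and I hate you"))
print(prompt_if_toxic("thanks, that was helpful"))  # None: no nudge needed
```

The key design point from the research is that the user keeps control: the system prompts rather than censors, which is what raised both the appreciation and the conversation-flow concerns noted above.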
Google's Jigsaw Perspective API uses AI to encourage positive interaction online, not just filter negativity.
AI tools are being developed to evaluate online comments for qualities like reasoning and empathy, promoting healthier and less polarized discussions.
By incorporating 'bridging attributes' in AI classifiers, efforts are made to increase mutual understanding and trust across different perspectives in online interactions.
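As a rough illustration, a Perspective API request can ask for bridging-style attributes alongside toxicity. The code only builds the request body; the experimental attribute names below are assumptions based on Jigsaw's published bridging work and should be checked against the current API documentation before use.

```python
# Sketch of a Perspective API comments:analyze request body that requests
# "bridging" attributes in addition to TOXICITY. Attribute names marked
# experimental are assumptions; verify against current Perspective docs.
import json

API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text: str) -> dict:
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {
            "TOXICITY": {},
            # Hypothetical/experimental bridging attributes:
            "REASONING_EXPERIMENTAL": {},
            "PERSONAL_STORY_EXPERIMENTAL": {},
        },
    }

payload = build_request("I disagree, but I see why you'd feel that way.")
print(json.dumps(payload, indent=2))
```

A real call would POST this payload to `API_URL` with an API key; scores for the bridging attributes could then feed a ranker that rewards reasoning and empathy rather than only penalizing toxicity.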
The Council on Tech and Social Cohesion is focused on incentivizing technology to promote trust and collaboration, rather than division and conflict.
The revamped Steering Committee consists of diverse experts working at the intersection of technology and social cohesion, driving initiatives like design codes, elections integrity best practices, and digital peacebuilding efforts.
The Council is working on multiple fronts, including public policy, funding, metrics, advancing the underlying science, and implementation, to drive adoption of prosocial technology and mitigate harms.
Imagine the potential of AI mediators to assist in conflict resolution alongside human mediators, offering objective perspectives and solutions.
Digital technologies have the power to enhance inclusion in mediation and peace processes by addressing barriers like distance, language needs, and limited access to information.
Social media analytics and digital technologies are increasingly being integrated into peace agreements to address harmful social media content and amplify voices for peace.
Technology governance often targets harmful digital content itself, but the focus needs to shift toward the design of technology, which shapes the incentives that produce that content in the first place.
It is crucial to move beyond content governance and prioritize tech design governance to encourage prosocial behavior and diminish harmful actions on tech platforms.
Prosocial tech design governance entails incentivizing and regulating tech products to amplify positive behaviors, emphasizing the importance of tech designs in shaping human behavior.
Deliberative technology, enhanced by AI, can foster inclusive public discourse by bringing together diverse perspectives to tackle shared challenges.
Deliberative technologies enable dynamic exchanges that go beyond traditional polls, allowing participants to refine solutions collaboratively.
The integration of AI in deliberative tech not only streamlines processes but also amplifies democratic participation, navigates polarization, and reveals common ground for more effective solutions.
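One core mechanic behind deliberative platforms (in the spirit of tools like Polis) is surfacing common ground: statements that clear a high agreement bar in every opinion cluster, not just on average. The votes, clusters, and threshold below are made-up illustrative data, not any platform's real pipeline.

```python
# Minimal common-ground sketch: keep only statements with high agreement
# in *every* opinion cluster. Data and threshold are illustrative.
from statistics import mean

# votes[cluster][statement] = list of votes (1 = agree, 0 = disagree)
votes = {
    "cluster_a": {"s1": [1, 1, 1, 0], "s2": [1, 0, 0, 0]},
    "cluster_b": {"s1": [1, 1, 0, 1], "s2": [0, 0, 1, 0]},
}

def common_ground(votes: dict, min_agreement: float = 0.6) -> list:
    """Statements whose agreement rate clears the bar in every cluster."""
    statements = next(iter(votes.values())).keys()
    return [
        s for s in statements
        if all(mean(cluster[s]) >= min_agreement for cluster in votes.values())
    ]

print(common_ground(votes))  # s1 clears 0.6 in both clusters; s2 does not
```

Requiring agreement within each cluster, rather than overall, is what lets these tools reveal consensus that majority-vote polling would miss.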
Social media platforms are not well-prepared for the upcoming elections, with all scoring below 62% on election-readiness measures.
Many platforms lack policies to stop the spread of manipulated content like deepfakes and to prevent micro-targeting of AI-generated political ads.
There is a lack of transparency regarding platforms' performance, enforcement of policies, and safety teams, raising concerns about their effectiveness in maintaining election integrity.
Platforms should consider rate limits and circuit breakers that broaden participation from representative voices and reduce exposure to divisive content.
Design changes should prioritize quality-based content rankers over engagement-based ones to promote positive user experiences and minimize exposure to harmful content.
Promoting and amplifying authoritative content about elections can help platforms combat misinformation and build trust with users.
Legislation needs updating to address tech-fueled violence. Existing laws fail to hold tech companies accountable for harmful content they facilitate or create.
Platforms like Facebook, YouTube, and Twitch have been implicated in spreading hate and extremism. The companies' algorithms have been shown to amplify harmful content.
Section 230 of the Communications Decency Act offers broad immunity to tech companies but is outdated. There is a need to redefine accountability in light of platforms' roles in spreading online harm.
The ProSocial Ranking Challenge is offering a chance to test ranking algorithms that can improve social media content and outcomes.
The challenge will assess the impact of different algorithms on social media users' knowledge, feelings, and interactions.
The competition involves developing and testing new ranking algorithms, delivered through a browser extension that re-ranks content in participants' feeds on Facebook, Reddit, and other platforms.
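A challenge entry of this kind amounts to a reranking function over fetched feed items. The sketch below is hypothetical: the item fields, signals, and weights are assumptions chosen to show the shift from engagement-based to quality-based ordering, not the challenge's actual interface.

```python
# Hypothetical prosocial reranker: order feed items by a blended quality
# score instead of raw engagement. Fields and weights are assumptions.

def rerank(items: list) -> list:
    """Sort feed items by a blended quality score, highest first."""
    def quality(item: dict) -> float:
        # Downweight engagement; upweight trusted-source and civility signals.
        return (0.2 * item["engagement"]
                + 0.5 * item["source_trust"]
                + 0.3 * item["civility"])
    return sorted(items, key=quality, reverse=True)

feed = [
    {"id": "a", "engagement": 0.9, "source_trust": 0.2, "civility": 0.3},
    {"id": "b", "engagement": 0.4, "source_trust": 0.9, "civility": 0.8},
]
print([item["id"] for item in rerank(feed)])  # "b" ranks first despite lower engagement
```

Measuring how such rerankings change users' knowledge, feelings, and interactions is exactly the outcome evaluation the challenge describes.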
Tech can bring people together and promote peace and democracy if designed with trust and collaboration in mind.
Digital tools like chatbots and AI can play a significant role in transforming conflict zones and promoting peace online and offline.
Events like the Digital Peacebuilding Expo and the Defending Democracy Symposium highlight the potential of tech to enhance societal well-being and drive positive change in our digital landscapes.
Social media companies are exploring ways beyond engagement-based ranking to ensure user safety and quality content.
Pinterest is focused on tuning AI algorithms for positivity and emotional wellbeing to improve user experiences.
Pinterest CEO Bill Ready emphasizes that prioritizing safety and emotional wellbeing can be a good business model, leading to positive changes in the industry.