In Pursuit of Peace and Dignity: Navigating Human Rights, Global Politics, and Digital Frontiers on Human Rights Day 2023

“Join us on Human Rights Day 2023 for a poignant exploration of our shared humanity. Amidst the tumult of global conflicts in the Middle East, Africa, and Europe, this event delves into the relentless assault on human rights. Through gripping narratives, thought-provoking discussions, comedy, and music, we aim to unveil the pain echoing across continents. Yet, within this darkness, we also aim to discover glimmers of hope that illuminate a path towards a future where dignity prevails. Stand with us, embrace the urgency, and let’s collectively envision a world where human rights are not just given lip service but defended and celebrated.”

Event details:
December 10th, 2023, 6 PM Pacific, online at:

Musician: Pete Kronowitt

If Steve Earle threw a margarita at Elvis Costello and got pissed enough to write political tunes, they would sound like Pete Kronowitt songs. Following in the footsteps of folk singers advocating to better humanity, Pete has organized, marched and sang his way across this land. Pete founded Face the Music Collective, a guide for creative activists utilizing performances to inspire targeted individual action, and is on the board of Music Declares Emergency US, a climate music industry nonprofit with a mission to activate fans.

Franchesca Fiorentini

American journalist, correspondent, activist, and stand-up comedian. Host of Newsbroke and The Bitchuation Room Podcast.

Will Durst: Acknowledged by peers and press alike as one of the premier political satirists in the country, Will Durst has patched together a comedy quilt of a career, weaving together columns, books, radio and television commentaries, acting, voice-overs, and most especially, stand-up comedy, into a hilarious patchwork of outraged and outrageous common sense. His abiding motto is, “You can’t make stuff up like this.” The New York Times calls him “possibly the best political comic in the country.” Fox News agrees “he’s a great political satirist,” while the Oregonian hails him as a “hilarious stand-up journalist.”

Ousman Noor studied law at SOAS, University of London, and social anthropology at the University of Oxford. He worked as a human rights barrister (lawyer) in London for 9 years, specializing in refugee and detention law, and taught as a Senior Teaching Fellow at SOAS. For more than 3 years, he was Government Relations Manager at Stop Killer Robots, a coalition of 250+ NGOs from 70+ countries. Following a personal tweet calling for an end to occupation, apartheid and ethnic cleansing in Palestine, his employment was terminated.

Kevin Welch is the president of EFF-Austin, a digital civil liberties organization founded alongside the Electronic Frontier Foundation (EFF); it continues to be a member of the EFF’s Electronic Frontier Alliance (EFA). At EFF-Austin, he leads the push to educate the public and politicians about important legal and cultural issues confronting society in emerging technological spaces. He has spoken on these topics at diverse venues, including SXSW and the State Department. He is a Caltech graduate with degrees in Bioengineering and English.

Brett Wilkins is a San Francisco-based writer and activist whose work focuses on issues of war and peace, and human rights. He is a staff writer at Common Dreams.

Rev. Martin Todd Allen is an Associate Minister at the Church for the Fellowship of All Peoples. Previously, Rev. Allen worked as a prison, hospital and military Chaplain and currently works as a hospice chaplain in the South Bay. In addition, he serves on the board of directors of The Human Agenda.

Bill Budington is a long-time activist, cryptography enthusiast, and a Senior Staff Technologist on EFF’s Public Interest Technology team. His research has been featured in The New York Times, The Los Angeles Times, and The Guardian, and cited by the US Congress. He is the lead developer of Cover Your Tracks, led HTTPS Everywhere from 2015-2018, and has contributed to projects like Let’s Encrypt and SecureDrop. Bill has spoken at USENIX Enigma (2016), HOPE (2014, 2016, 2018, 2020, 2022), CCC (2017), InfoSec Southwest (2017), ShmooCon (2019, 2020), and other infosec conferences. Bill’s primary interest lies in dismantling systems of oppression, building up collaborative alternatives and, to borrow a phrase from Zapatismo, fighting for a ‘world in which many worlds fit.’ He loves hacker spaces and getting together with other techies to tinker, code, share, and build the technological commons.

Organized, hosted, and panel discussions led by:

Vahid Razavi founded Ethics In Technology 10 years ago and is now the founder of No Ethics In Big Tech. He is the author of two books, The Age of Nepotism and Ethics in Tech and Lack Thereof. As a lifelong activist and humanitarian, he has produced hundreds of videos on various social issues, including ethics in technology, Silicon Valley, regional politics, poverty, war, and social injustice.

In loving memory of all our departed parents especially Parivash Gharavi.

This event is not financed, endorsed or supported in any way by any government, for-profit, or nonprofit corporation.

The event is free of charge and does not require registration. If you like the content, we ask that you subscribe to our channel and share the video with friends.

The Responsible AI Community in Solidarity with Gaza and the Palestinian People

As members of the Responsible AI Community, we believe that technology can serve and uplift society, but only if we are attentive to the harms and devastating impacts that emerge from its development and use. We recognize the role of technology in deepening and advancing discrimination, oppression, and state violence everywhere.

For these reasons, and from our full commitment to fundamental human rights, we unreservedly condemn the Israeli state’s latest violence against the Palestinian people in Gaza and the West Bank. As a community of Responsible AI practitioners, researchers, and advocates, we also condemn the use of AI-driven technologies for warmaking, in which the aim is to make the loss of human life more efficient, and the instances in which anti-Palestinian biases are perpetuated throughout AI-enabled systems.

History did not start on October 7th, 2023, but the current crisis reflects the horrific scale and extent of violence enabled by the use of AI-driven technologies. In May 2021, Israel’s army launched what it called the world’s first “AI war” against the people of Gaza. Unit 8200 of the Israeli intelligence agency has been at the forefront of using undisclosed algorithms for military targeting within the occupied territories. In July 2023, Bloomberg reported that the Israel Defense Forces (IDF) had further embedded AI into lethal operations to “select targets for air strikes and organize wartime logistics,” including the use of an AI recommendation system to select aerial targets. The Israeli government’s use of AI-driven technology has led to strikes against over 11,000 targets in Gaza since the latest conflict started on October 7, 2023.

The Israeli technological-military complex predates the use of AI. It is underpinned by the intrusive biometric and surveillance technologies that laid the groundwork for the exponential and expedited dehumanization of Palestinians that we see today. Pegasus, the cyberwarfare software developed by the NSO Group, has been found on the phones of human rights defenders worldwide. Moreover, facial recognition technology restricts freedom of movement across checkpoints, and tools like Smart Shooter automate killing by adding AI to automatic weapons at those checkpoints. These are the very technologies purchased and deployed by law enforcement forces elsewhere in the world to police marginalized and oppressed communities.

Beyond weaponry, algorithmic systems reify and exacerbate the dehumanization of Palestinians. Seemingly innocuous technologies have been weaponized against Palestinians. Recently, Meta’s tools translated Arabic Instagram biographies from “Palestinian, alhumduallah [Praise be to God]” to “Praise be to God, Palestinian terrorists are fighting for their freedom.” Meta-owned WhatsApp created AI-generated images of brown boys with guns when given the prompt “Palestinian.” And years ago, a Palestinian man was arrested at a border checkpoint when Facebook translated his profile message of “Good morning” to “Attack them.” These moments uncover how Islamophobic and xenophobic biases are embedded in the very design of our tech-facilitated media ecosystem.

Now, this decades-long, tech-enabled apartheid and occupation of the Palestinian people has culminated in a “complete siege” of Gaza, the densely populated home to 2.1 million residents, of whom 1.7 million are refugees and half are newborns and children. As reiterated by genocide scholars and human rights experts, this siege, bombardment and forced expulsion is tantamount to a genocide of the Palestinian people. To date (November 13, 2023), 11,078 Palestinians in Gaza have been killed, of whom 64% are estimated to be women and children.

As members of the Responsible AI community, we must reckon with the use of AI technology to commit such scales of violence and death that we are witnessing being carried out against Palestinians. We call on our community to demand that Western governments, international bodies such as the UN, and other actors who have influence in the international community push the Israeli government for an immediate ceasefire.

We also call on the technology companies we work with, and for, to:

  • Withdraw technology support to the Israeli government and cease defense contracts with the Israeli government and military;
  • Protect their employees, such as those at Google and Amazon who have called for an end to Project Nimbus and bravely demanded a ceasefire, from retribution and harassment; and,
  • Stop the direct, and passive, automated censorship of content revealing the plights of Gazans and Palestinians on social media platforms such as X/Twitter, Facebook, and Instagram, as these recordings and posts are one of the few ways we are able to support Gazans and Palestinians from near and far.

And finally, we call on all of our colleagues within the AI and tech sectors to stand in solidarity with Palestinians and call for an end to their siege, bombardment and occupation.

Click here to become a signatory.

The letter will be updated with signatories once a day.

Listed in alphabetical order by first name. Affiliations provided for identification purposes only.

  • Aaliya Briggs
  • Abdel Aziz al-Rantisi, Empirical AI Research, Alexandria University
  • Abeer Mukhemir, Software Engineer
  • Abu Salah Taha, Gender Bias Researcher, Qatar University
  • Afroditi Sakellaropoulou, MPhil University of Cambridge
  • Afsaneh Rigot – Founder, De|Center
  • Ahmed Mohamed, QMUL, UK
  • Ahmed Yassin, Researcher, Al-Azhar University, Cairo
  • Aidan Brooks
  • Al-Hussein Abutaleb, PhD Student at the People-Centred AI Institute, University of Surrey
  • Alejandro Mayoral-Baños, Executive Director and Founder, Indigenous Friends Association
  • Alessandra Renzi, Associate Professor, Concordia University, Montreal
  • Alex Hanna, Director of Research, DAIR
  • Alexandra Mateescu (Researcher, Data & Society)
  • Alexandre Costa Barbosa, member, Homeless Workers Movement
  • Ali Abdul Rahim, University of Rochester
  • Ali Alkhatib
  • Amba Kak, Executive Director, AI Now Institute
  • Amrita Panesar, Data Scientist, Education First
  • Ana Brandusescu, PhD Candidate, McGill University
  • Anam Zahid, Teaching Fellow, Information technology university of the punjab, Pakistan
  • Aniobi Stanley Tobias (SAT)
  • Anoush Najarian, Engineering Manager and Director, Board of Directors of CMG
  • Aristides Milios, M.Sc. student, McGill University/Mila
  • Arzu Geybulla, Azerbaijan Internet Watch
  • Asem Alaa, PhD Student, Imperial College London
  • Ayah Soufan, PhD Researcher, University of Strathclyde
  • B.V. Alaka
  • Bhaskar Mitra, Principal Researcher, Microsoft Research
  • Bilawal Hameed, Senior Software Engineer at Spotify
  • Brendan Newlon, Ph.D.
  • Brian Keegan, Assistant Professor, University of Colorado Boulder
  • Brian Ng, Software Engineer
  • Brigitte Tousignant, Comms Lead, Hugging Face
  • Britney Muller, Founder and Executive Director, Data Sci 101
  • Bruno Moreschi, Early-career fellow, Collegium Helveticum, Zurich
  • Caitlin Doyle | Technical Program Manager | Great Place To Work
  • Caleb Moses, PhD Candidate, McGill University/Mila
  • Carlos Barreneche, Associate Professor, Universidad Javeriana
  • Chadapohn Chaosrikul
  • Cindy Lin, Data Product Manager, Cindy Lin Consulting and LA Tech4Good
  • Claudio Agosti, platform auditor and activist; Hermes Center, Reversing.Works, and AI Forensics
  • Dalia Hamouda, AI Product Owner
  • Damini Satija, Deputy Programme Director, Amnesty Tech
  • Daniel Whelan-Shamy | PhD Student at the Digital Media Research Centre Queensland University of Technology (QUT)
  • Daniele Metilli, University College London
  • Davor Petreski, Researcher, University of Melbourne
  • Dima Samaro, Tech & human rights Researcher
  • Dina Mikdadi
  • Divij Joshi, Doctoral Researcher
  • Donatella Della Ratta Associate Professor of Communications and Media Studies John Cabot University Rome
  • Dorothy R. Santos, Visiting Assistant Professor, UC Santa Cruz
  • Dr Abdul Karim Obeid, Data Engineer and Early Career Researcher at the Queensland University Of Technology node of the Australian Research Council’s Centre of Excellence for Automated Decision-Making & Society
  • Dr Abeba Birhane, Mozilla Foundation
  • Dr Ana Valdivia, Lecturer in AI, Government & Policy at the Oxford Internet Institute (University of Oxford)
  • Dr Iman Saleh, Data Scientist
  • Dr J. Rosenbaum
  • Dr Jessa Rogers
  • Dr Kerry McInerney, Leverhulme Centre for the Future of Intelligence
  • Dr Nakeema Stefflbauer, Founder and CEO, FrauenLoop
  • Dr Rebekah Cupitt, Birkbeck, University of London
  • Dr Syed Mustafa Ali, Lecturer in Computing, The Open University, UK
  • Dr Zahra Stardust, Postdoctoral Research Fellow, Queensland University of Technology
  • Dr. Gregory Gondwe, California State University – San Bernardino
  • Dr. Lena El-Malak, Independent Data Privacy Lawyer
  • Dr. Milagros Miceli, DAIR
  • Dr. Sasha Costanza-Chock, Associate Professor, Northeastern University
  • Dr. Timnit Gebru, Founder & Executive Director of The Distributed AI Research Institute (DAIR)
  • Dr. Tony Roberts, Digital Research Fellow, University of Sussex
  • Edgar Navarro
  • Edwina Hughes, Coordinator, Stop Killer Robots Aotearoa New Zealand Campaign
  • Ehsan Dehghan, QUT
  • Emily Keddell, Associate Professor, University of Otago
  • Emily M. Bender
  • Eric Pence, Masters student, MIT 21/22
  • Esra’a Al Shafei
  • Esraa Ali, Personalization and Recommender Systems, DCU, Ireland
  • Essam al-Da’alis, Researcher, AI4Palestine
  • Ezequiel Vijande, PhD student at UTN in Argentina
  • Fabiola Hanna, Assistant Professor of Emerging Media, The New School
  • Farya Hussain – Software/ML Engineer
  • Fawzia Zehra Kara-Isitt, Intelligent Data Analysis Group, Brunel University
  • Fernanda Bruno, Full Professor at the Federal University of Rio de Janeiro, Brazil
  • Fenwick McKelvey, Associate Professor, Concordia University
  • Frida Kiriakos, Mozilla
  • Gabriel Hope, Visiting Assistant Professor Harvey Mudd College
  • Genoveva Vargas-Solar, Principal Scientist. Signing as individual
  • Gisela Luján, director, Perú por el Desarme
  • Hailey Froese, Mozilla Foundation
  • Hanan Elmasu, Director, Mozilla Foundation
  • Hassan Yousef, AI Researcher, Qatar university
  • Hira Sheikh, PhD Candidate, Queensland University of Technology
  • Htaike Htaike Aung, Director, Myanmar Internet Project
  • Ilaria Fevola, Legal Officer, ARTICLE 19
  • Iqbal, Software Architect
  • J. Carlos Lara, Director, Derechos Digitales
  • Jac sm Kee, Co-founder, Numun Fund
  • Jackie Kay, Research Engineer, Deepmind
  • Jamie Hancock
  • Jamila Bradley, Principal, Hourglass Collaborative
  • Javad Hashemi, Data Scientist, Stitch Fix
  • Jeff Doctor, Impact Strategist, Animikii Indigenous Technology
  • Jeni Tennison, Executive Director, Connected by Data
  • Jessica de Souza
  • Joana Varon, Founder and Executive Directress at Coding Rights
  • Jonathan Moore, Principal at The Third Revolution
  • Jonathan Sterne, Professor, McGill University
  • Judith Rweyemamu
  • Julia Dressel
  • Julia Keseru
  • Julia Moreno Perri, Tech Consultant and Trainer
  • Julio Gaitán, Profesor, Universidad del Rosario
  • Junaid Qadir, Professor, Qatar University
  • Justyna Nowak, Mozilla Foundation
  • K Chmielinski
  • Kagami Rosylight, Mozilla Corporation
  • Kai Ninomiya, Software Engineer, Google
  • Karen Borchgrevink, Founder and Executive Director, LA Tech4 Good
  • Kate Sim, PhD, Oxford Internet Institute
  • Katherine Zhou, student @ University of Cambridge
  • Kathryn Henne, The Australian National University
  • Katya Abazajian
  • Khaled Mashal, Senior Research Manager, Kuwait University
  • Kimberley Paradis
  • Krystal Kauffman, Research Fellow, DAIR Institute
  • Laura Vodden, Data Scientist, QUT Digital Media Research Centre
  • Léa Yammine, Co-Director, the CeSSRA
  • Lindsay Weinberg, Clinical Assistant Professor, Purdue University
  • Liz B. Marquis, PhD Candidate at UMSI and UXR at MathWorks
  • Lori Regattieri, former Senior Fellow Trustworthy AI at Mozilla Foundation
  • Lujain Ibrahim, PhD student, Oxford Internet Institute
  • Marah Ramadan, MSc Student, Universitat Autònoma de Barcelona, Spain.
  • Marc Faddoul, Director, AI Forensics
  • Maream Abudoleh, master AI student, Istanbul Aydin University
  • Margarita Ochoa, PhD candidate, Queensland University of Technology
  • Maria Eugenia Villarreal, ECPAT/SEHLAC
  • Mariah Mendoza, Program Coordinator for Belonging, Equity, and Inclusion, Purdue University
  • Mariam Ali Issa, PhD Student, UC Irvine
  • Marius, Student, Stockholm School of Economics
  • Mark Dempsey, EU Senior Advocacy Officer, ARTICLE 19
  • Marlena Wisniak, human rights lawyer
  • Megan Bull, Software Engineer, Perceptronics Solutions Inc.
  • Michael Blake – Director, The Francis Dinh Blake Foundation
  • Michael Madaio, Research Scientist, Google Research
  • Michelle Lin, Mila – Quebec AI Institute, McGill University
  • Moe Fayez, Software Engineer
  • Mrs. Majd ismail
  • Muawiz Chaudhary, Masters, Mila
  • Nabil Alshurafa, Dr., Medical and Computing Researcher
  • Nagla Rizk, Professor, The American University in Cairo
  • Nahema Marchal, Research Scientist, Google DeepMind
  • Naomi Barnes, Dr. QUT
  • Nari Johnson, PhD Student, CMU
  • Natalie Kerby, Researcher, AI Forensics
  • Nathan Freitas, Director, Guardian Project
  • Nathan Kim, PhD student, University of Michigan School of Information
  • Neil Ballantyne, Doctoral Candidate, University of Otago, Aotearoa New Zealand
  • Nika Mahnic, PhD candidate, Queen Mary University of London
  • Nikhil Dharmaraj, MPhil in Ethics of AI, University of Cambridge
  • Nina J. Sangma, Communications Programme Coordinator, Asia Indigenous Peoples Pact
  • Nishant Subramani, PhD student at CMU
  • Paola Ricaurte, Feminist AI Research Network
  • Petra Molnar, Faculty Associate, Berkman Klein Center for Internet and Society, Harvard University
  • Peyrin Kao, Lecturer, UC Berkeley
  • Phillip Kieval, University of Cambridge
  • Prateek Waghre, Executive Director, Internet Freedom Foundation
  • Prathibha Chandra
  • Pratyusha Ria Kalluri, PhD, Stanford
  • Prima Shariff
  • Professor Marcus Foth, QUT
  • Pyrou Chung, Director, Open Development Initiative
  • Rabab Benrabah, Sales Executive. IBM
  • Rachel Coldicutt, Executive Director, Careful Trouble
  • Ramak Molavi Vasse’i, Digital Rights Advocate, The Law Technologist
  • Rayya El Zein, Director of Partnerships, Code for Science & Society
  • Reem Suleiman, Oakland Privacy Advisory Commissioner, Mozilla Foundation
  • Renata Avila, Human Rights Lawyer
  • Reshem Khan, AI Ethics Lead, non-profit/public
  • Riccardo Angius, Researcher, AI Forensics
  • Rida Qadri, Research Scientist, Google
  • Riham Samaneh
  • Robin Netzorg, PhD Student, UC Berkeley
  • Robin Vanderborght, PhD researcher, University of Antwerp
  • Ruth, Stanford University
  • Ryan Burns, Associate Professor, University of Calgary
  • Salah al-Arouri, Director, Hebron University
  • Salah al-Bardawil, Research Manager, Cairo University
  • Salvatore Romano, AI Forensics
  • Samer Hassan, Associate Professor / Faculty Associate, Universidad Complutense de Madrid / Berkman Klein Center at Harvard University
  • Sandra, Stanford University
  • Sara Marcucci, The GovLab
  • Sarah Ahmed, Technical Writer, Salesforce
  • Sarah Chander
  • Sarah Grant, MA Student, Concordia University
  • Sarah Myers West, Managing Director, AI Now Institute
  • Shahed Warreth
  • Shreya Chowdhary, PhD Student, University of Michigan School of Information
  • Shuvi Jha, CS student at Stanford
  • Simone Brugiapaglia, Assistant Professor, Concordia University
  • Soizic Pénicaud
  • Stefanie Felsberger, University of Cambridge
  • Syed Ahtesham Ul Haq, Data Analyst
  • Tamara Vukov, Associate Professor, Université de Montréal
  • Tarcizio Siva, Researcher, Desvelar
  • Tess Buckley, AI Ethics and Literacy Advisor, HumansForAI
  • Tiffiniy Cheng, Cofounder Fight for the Future
  • Tim Davies, Research Director, Connected by Data
  • Tim O’Gorman, Senior Research Scientist, Thorn
  • Tina M. Park, Ph.D., Partnership on AI
  • Tomás Dodds, Assistant Professor, Leiden University
  • Trisha Suri, cofounder at stealth startup, Carnegie Mellon alum, human
  • Txetxu Ausin, AI ethics researcher, CSIC (Spain)
  • Ulises Mejias, Professor, SUNY Oswego
  • Usman Anwar, PhD Student, University of Cambridge
  • Vahid Razavi, Founder
  • Vidushi Marda, Co-Executive Director, REAL ML
  • Vikas Dhiman, Assistant Professor, University of Maine
  • Wanda Muñoz, Feminist AI Research Network
  • Warren Armstrong, Social Impact Technologist / Director, Little Owl
  • Xiaowei R. Wang, PhD, UCLA Center on Race and Digital Justice
  • Xue Ying Tan, Software Engineer, QUT
  • Yahya Sinwar, Research Scientist, Islamic University of Gaza
  • Yomna Elsayed, Researcher
  • Zee Fryer, Senior Data Engineer

Concerned Activists and Hiroshima Bombing Survivor Host Virtual Discussion Night Sunday, August 6th, to Urge Concrete Actions Towards World Peace

Each year, many around the world acknowledge August 6th as a solemn remembrance of the WWII atomic bomb attacks on civilians in Hiroshima and Nagasaki.

We are a collection of activists from the technology, business, and political spheres working for peace and civil liberties. We will host a Zoom event, “Harmony for Humanity,” on Sunday, August 6th, from 6 pm to 8:30 pm Pacific time, open to the public. Our event will focus not only on commemorating those who suffered and died due to the bombs but also on current world conflicts and the steps people can take, and can urge their leaders to take, to promote a peaceful world. Please sign up here.

This event is not financed, endorsed or supported in any way by any government, for-profit, or nonprofit corporation. It is 100% grassroots and supported by attendees.

This is not your Oppenheimer movie. For more information on this event, visit:

This unique gathering brings together a diverse lineup of talent, including four hilarious comedians, six inspiring speakers, and a captivating musician, all driven by a shared commitment to promote world peace.

Through laughter, thought-provoking discussions, and soul-stirring melodies, this event aims to create a space where the power of humor, ideas, and music converge to facilitate important conversations surrounding nuclear disarmament and the preservation of human rights. Together we will explore the impact of the Hiroshima bombing and its lasting consequences on both the survivors and the world at large.

Our incredible comedians will infuse the evening with laughter, using their witty and insightful humor to shed light on serious topics, break down barriers, and encourage meaningful dialogue. They will remind us that even in the face of adversity, laughter can be a catalyst for change and an essential tool for healing.


Vahid Razavi founded Ethics In Technology 10 years ago and is now the founder of No Ethics In Big Tech. He is the author of two books, The Age of Nepotism and Ethics in Tech and Lack Thereof. As a lifelong activist and humanitarian, he has produced hundreds of videos on various social issues, including ethics in technology, Silicon Valley, regional politics, poverty, war, and social injustice.


Mr. Takashi Thomas Tanemori was born in December 1937 in Hiroshima, Japan. His father taught him, as the firstborn son, the Samurai Code to guide him during many years of searching. After surviving the bombing of Hiroshima less than a mile from ground zero, losing his parents, and living with relatives, he emigrated to the Central Valley of California as a teenager. Along with authoring his life story in “Hiroshima: Bridge to Forgiveness,” he has become a speaker at school, university, and spiritual multi-faith gatherings, sharing his story of Peace through Forgiveness.

Helen Jaccard has been a crew member, public speaker, and the Project Manager of the Veterans For Peace “Golden Rule” sailboat project since 2015. She is also a member of the Women’s International League for Peace & Freedom. Helen is an author and activist, educating the public about the environmental and cultural impacts of war, militarism, and the nuclear industry.

Norman Solomon is an American journalist, media critic, antiwar activist, and former U.S. congressional candidate. Solomon is a longtime associate of the media watch group Fairness & Accuracy In Reporting (FAIR). In 1997 he founded the Institute for Public Accuracy, which works to provide alternative sources for journalists, and serves as its executive director. Solomon’s weekly column, “Media Beat,” was in national syndication from 1992 to 2009.

Dr. Dorsey Blake serves as Presiding Minister of The Church for the Fellowship of All Peoples and Faculty Associate at the Pacific School of Religion. He is also a member of the Coordination Committee of the National Committee of Elders.

Franchesca Fiorentini is a correspondent and stand-up comedian, and host of Newsbroke and The Bitchuation Room Podcast.

Will Durst, acknowledged by peers and press alike as one of the premier political satirists in the country, has patched together a comedy quilt of a career, weaving together columns, books, radio and television commentaries, acting, voice-overs, and most especially, stand-up comedy, into a hilarious patchwork of outraged and outrageous common sense. His abiding motto is, “You can’t make stuff up like this.” The New York Times calls him “possibly the best political comic in the country.” Fox News agrees “he’s a great political satirist,” while the Oregonian hails him as a “hilarious stand-up journalist.”

Brett Wilkins is a San Francisco-based writer and activist whose work focuses on issues of war and peace, and human rights. He is a staff writer at Common Dreams.

José is the Community Manager at the Electronic Frontier Foundation. In 1990, he experienced the United States’ wars from the other side, visiting his family in Panamá while it was under US occupation. He has organized against war and militarism ever since. At EFF, he has worked on teams focused on police, carceral, and border technologies.

Chloe McGovern has performed at major clubs throughout New York City and the country, including The Comedy Cellar Underground, The Stand, Gotham Comedy Club, Caroline’s, The Hollywood Improv, and The Laugh Factory, among others.

Annette Mullaney is a San Francisco-based comic who’s performed all over the country, from SF Sketchfest and Austin’s Out of Bounds Festival to the Detroit Women of Comedy Festival, and recently opened for Third Eye Blind. Originally from Michigan, she lived in Syria for several years and has been a software engineer, translator, and writer for a magazine that she now realizes was a money-laundering front for the cousin of a dictator.


Mike Rufo’s songs and poems arc across the waves of life. His music is gripping and eclectic, reflecting his impassioned engagement with the world. Mike’s musical language builds upon powerful lyrics, soaring vocals, driving rhythms, and melodic riffs that explore emotional depths and transformation. He also mixes things up with a knack for well-conceived parody, with a dash of political punch, like his popular singles Hit the Road, Trump! and Spyin’ Eyes.

Harmony for Humanity: Uniting for Peace on Hiroshima Day

This year’s theme for No Ethics in Big Tech’s (formerly Ethics In Tech) annual Hiroshima Day commemoration and panel is “Harmony for Humanity: Uniting for Peace on Hiroshima Day.”

Our lineup this year includes a diverse mix of speakers and comedians, promising a unique blend of comedy and discussion that makes No Ethics In Big Tech events a one-of-a-kind experience.

Legendary political comedian Will Durst is on the mend and back in our all-star lineup, along with stand-up superstar, journalist, and activist Francesca Fiorentini, the ever-uproarious Chloe McGovern, and cosmopolitan humorist Annette Mullaney, whose work in Big Tech seasons her always witty sets. Singer/songwriter/guitarist Michael Rufo, whose crafty lyrics skewer government surveillance and current affairs, is our special musical guest.

On the speaker side, host and No Ethics in Big Tech founder Vahid Razavi has assembled one of our best panels yet. Author and Veterans For Peace activist Helen Jaccard will lead off a lineup that includes Rev. Dr. Dorsey Blake of the Church for the Fellowship of All Peoples, Common Dreams writer Brett Wilkins, RootsAction founder Norman Solomon (whose latest book, War Made Invisible, is a must-read for all peace-lovers), José of the venerable Electronic Frontier Foundation, and our very special guest, atomic bomb survivor and Hiroshima: Bridge to Forgiveness author Takashi Thomas Tanemori.

“I am looking forward to meeting many of you,” says Tanemori. “It is wonderful that you are promoting peace. For me, I’m promoting peace through forgiveness.”

Always one of our more popular events, this year’s Hiroshima Day commemoration is all the more timely given the release of Christopher Nolan’s summer blockbuster Oppenheimer, a biopic chronicling the life of theoretical physicist and “father of the atomic bomb” J. Robert Oppenheimer. This being a No Ethics in Big Tech event, we won’t pull any punches like Hollywood inevitably does, although we stand in solidarity with striking writers and actors. As always, we’ll have a frank discussion about the history, current state, and future of nuclear weapons, which arguably now as much as ever represent an existential threat to humanity.

The world’s nine nuclear-armed countries—the United States, Russia, China, France, Britain, Israel, India, Pakistan, and North Korea—spent a combined $83
billion on their nuclear arsenals last year, with more than half of that amount attributable to the U.S. Instead of disarmament, the United States is spending tens
of billions of dollars modernizing and upgrading its nuclear arsenal. Another world is possible. The International Campaign to Abolish Nuclear Weapons (ICAN) was awarded the Nobel Peace Prize in 2017 for its work culminating in the landmark Treaty on the Prohibition of Nuclear Weapons, which now has 92 signatories and 68 states parties. However, none of the world’s nine nuclear powers has signed the treaty. But as United Nations Secretary-General António Guterres recently asserted, eliminating nuclear weapons is “not only possible, it is necessary.”

Guterres’ warning came amid heightened nuclear fears during Russia’s invasion of Ukraine and nuclear threats from Russian leaders. The world hasn’t been this close to nuclear war since the 1980s, when tensions between the United States and the Soviet Union reignited to levels unseen since the Cuban Missile Crisis. According to the Bulletin of the Atomic Scientists’ Doomsday Clock, we are closer to “midnight”—nuclear Armageddon—than even during the dark days of the Cold War’s final decade.
It doesn’t have to be this way. Join us on Hiroshima Day, August 6, and let’s realize a better world together.


The Oppenheimer Movie: Humanizing the Inhumane

Let me share my thoughts on the movie Oppenheimer, which I attended on its opening night in Palo Alto at the 7:00 PM showing. As a disabled individual, I was assigned seat E2 and had to sit in the handicapped seat at the top of the theater, as using the stairs was not an option for me.

When I purchased my ticket online from the Emerson St. theater in Palo Alto, their website indicated that the movie was nearly sold out. However, upon arrival, I was surprised to see that the occupancy was only around 75% at best. This experience reminded me of how certain organizations manipulate ticket sales, such as the Church of Scientology’s bulk purchases of Tom Cruise movies or movie production houses giving away tickets for free reviews to create a buzz.

In my opinion, Oppenheimer glorifies individuals responsible for the creation of weapons of mass destruction. The film attempts to humanize their actions, which resulted in events such as Trinity, Hiroshima, Nagasaki, and over 1,000 nuclear tests in the US alone. Unfortunately, the movie fails to reflect on the impact these actions had on the victims of the atomic blasts in Japan, the victims of nuclear energy accidents, or even the 1,000-plus tests themselves. Ultimately, it is three hours of disjointed theater that overlooks the true consequences.

It is disheartening to think that our society, with over $800 billion spent on the military, has cultivated a generation fascinated by violence and technology, supporting movies like this. I have no doubt that it will receive awards, accolades, and generate significant streaming income. It seems both Hollywood and the tech industry excel at profiting from the war machine, essentially making blood money.

However, there is an alternative response to this movie. We are organizing an event to commemorate the anniversary of Hiroshima Day, which claimed the lives of 160,000 people. The event will feature a Hiroshima survivor, as well as representatives from RootsAction, Common Dreams, Veterans For Peace, and the Church for the Fellowship of All Peoples, along with four talented comedians and musicians. Rather than glorifying individuals like Oppenheimer, our event aims to initiate meaningful discussions and even find humor in their inhuman actions. I invite you to join us for the Harmony for Humanity Hiroshima Day Event.

Please join us for the event on August 6th at 6:00 PM.

Thank you for taking the time to consider my review and invitation.


Vahid Razavi

Mr. Tanemori


Google Screenwise: An Unwise Trade of All Your Privacy for Cash

With each passing day, it’s increasingly clear that we can’t rely on the “ethics” and “value systems” of corporations to judge their own messaging around consent.

Imagine this: an enormous tech company is tracking what you do on your phone, even when you’re not using any of its services, down to the specific images that you see. It’s also tracking all of your network traffic, because you’re installing one of its specially designed routers. And even though some of that traffic is encrypted, it can still know what websites you visit, due to how DNS resolution works. Oh, it’s also recording audio from a custom microphone placed near your TV, and analyzing what it hears. It’s an always-on panopticon. In exchange for your privacy (and the privacy of any guests who may be using your Internet connection, or talking near your television), you receive a gift card for a whopping $20.

No, we’re not talking about Facebook—we’ve already detailed the frightening consequences of Facebook’s sneaky, privacy-invading and security-breaking “user research” program. This is Google’s “ScreenWise Meter,” another “research program” that, much like Facebook’s, caused an upheaval this week when it was exposed. In order to spy on iOS users, Facebook took advantage of Apple’s enterprise application program to get around Apple’s strict app distribution rules. When news of this Facebook program hit earlier this week, Google scrambled to pull the plug on its own “user research” application, which was taking advantage of the same Apple program. Apple quickly revoked both organizations’ Enterprise Certificates, shutting down all of Facebook’s and Google’s internal iOS applications and tooling, leaving the two giants in disarray.

We’re not a fan of Apple’s walled-garden approach to application distribution and its strict control over who gets to play on its platform and who doesn’t. However, this drama shined a valuable spotlight on the deceptive messaging and data harvesting practices surrounding two so-called “opt-in” “research” panopticons. Although Google pulled its iOS application, all the other parts of its ScreenWise Meter surveillance program are still in operation—and in some cases, they collect even more data about their “research users” than the Facebook counterpart did.
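To see why DNS resolution leaks the sites you visit even when your traffic is encrypted, consider what a plain DNS query looks like on the wire. The sketch below (illustrative only, not Google’s actual mechanism) builds a standard DNS query packet by hand and shows that the hostname travels in cleartext, readable by any device on the path, such as a company-provided router:

```python
# Minimal sketch: a conventional (non-encrypted) DNS query carries the
# hostname in cleartext, so any on-path device -- e.g. a router supplied
# by a "research program" -- can see which site you are looking up,
# even if the subsequent HTTPS connection is encrypted.
import struct

def build_dns_query(hostname: str) -> bytes:
    """Build a bare-bones DNS A-record query in RFC 1035 wire format."""
    # Header: ID, flags (recursion desired), QDCOUNT=1, then zero counts.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: each dot-separated label is length-prefixed, then a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # Question: QNAME + QTYPE=A (1) + QCLASS=IN (1).
    question = qname + struct.pack(">HH", 1, 1)
    return header + question

packet = build_dns_query("example.org")
# The hostname labels are plainly visible in the raw packet bytes:
print(b"example" in packet and b"org" in packet)  # True
```

Encrypted alternatives such as DNS-over-HTTPS exist precisely because of this exposure, but a router that handles your DNS resolution itself sees every lookup regardless.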

Why This Google Shareholder Wants an End to Project Nimbus

Hundreds of Google’s own employees have spoken out against this controversial contract, which provides advanced technology that will be used to further oppress and harm millions of Palestinians

Today, for the first time in company history, Google shareholders like me voted on a resolution that calls into question a contract with the Israeli government and military known as Project Nimbus. Hundreds of Google's own employees have spoken out against this controversial contract, which provides advanced technology that will be used to further oppress and harm millions of Palestinians.


Shareholder resolutions have become an important tool for demanding ethical business practices in boardrooms. The resolution asks that Google and parent company Alphabet evaluate the harm caused by contracts with institutions that have violated human rights, like the Israeli military, US Immigration and Customs Enforcement (ICE), and US Customs and Border Protection (CBP). These contentious deals lead people to rightfully ask why a company that creates products we all know and rely on every day would also create and supply products to institutions that violate people's basic human rights.

For years, prominent human rights organizations have raised alarms about Israel's brutal oppression of Palestinians. There is now a near-total consensus among the United Nations and all major human rights organizations, including Amnesty International, Human Rights Watch, and Israel's leading human rights organization B'Tselem, that Israel's discriminatory policies and practices against Palestinians amount to the serious crime of apartheid.

Through its Project Nimbus contract, Google provides cloud services to the Israeli army, making it easier for Israel to surveil and oppress Palestinians. The contract also provides data support to the Israel Land Authority (ILA), which according to Human Rights Watch, uses discriminatory policies to expand illegal Israeli settlements on Palestinian land where Palestinians aren't allowed to live.

Google has had incredible success recruiting a diverse workforce, and it should be embracing employees who want to make the company better. The company may have difficulties retaining talent if it pursues contracts that run contrary to the stated corporate values that attracted employees to Google in the first place. In May of last year, 250 Jewish employees at Google urged the company to support Palestinian rights and end its ties with the Israeli military, and nearly 700 Google employees signed a petition rejecting the Project Nimbus contract.

Google aspires to value democracy, accountability, and safety, and rightfully says that companies "can make money without doing evil." But as tech evolves and becomes more pervasive in almost every aspect of our lives, there is a growing awareness of the threat it can pose to human rights. Google has canceled these kinds of contracts before and it should reverse its contract with the Israeli military too. After public backlash over a contract with the US military, Google outlined new ethical principles for its use of artificial intelligence (AI), including a commitment not to use AI for weapons or surveillance, particularly in cases where there is a violation of "internationally accepted norms." Given the outcry from human rights organizations over Israel's system of apartheid, the Project Nimbus contract sharply conflicts with Google's stated ethical standards.

As shareholders, we believe that Google's commitment to ethics is a good thing for society and for investors. Not only do Google's AI principles help ensure that Google technology is used to bring people together, rather than cause harm, but these ethical commitments also make Google unique amongst its competitors in a world where more and more users want companies they support to share their values.

Ethical business matters. Increasingly, workers, consumers, and even investors are demanding better from corporations. Google has a choice: instead of enabling human rights abuses, it should promote technology that has a positive impact on the world. Google can and should stand on the right side of history by ending Project Nimbus.

Warren, Jayapal Demand Google Stop Trying to ‘Bully’ DOJ Antitrust Official

Noting similar efforts by Amazon and Facebook, they said the tech giant "should focus on complying with antitrust law rather than attempting to rig the system with these unseemly tactics."
U.S. Sen. Elizabeth Warren and Rep. Pramila Jayapal on Wednesday sent a letter to Google CEO Sundar Pichai demanding that the company swiftly end its "ongoing attempts to strip Assistant Attorney General Jonathan Kanter of his authority to enforce antitrust law."

"These efforts to bully regulators and avoid accountability... are untethered to federal ethics law and regulations."

The day after Kanter took the oath of office to lead the Antitrust Division at the Department of Justice (DOJ) in November—following a bipartisan Senate vote confirming his nomination—Google suggested in the letter to federal officials that Kanter should be recused from litigation and probes against the tech giant because he may not be "fair and impartial."

Google's stance apparently has not changed. CNBC reported Wednesday that a spokesperson pointed to an earlier statement about his recusal, saying, "Mr. Kanter's past statements and work representing competitors who have advocated for the cases brought by the department raise serious concerns about his ability to be impartial."

Warren (D-Mass.)—who voted to confirm Kanter—and Jayapal (D-Wash.) pushed back against Google's claims, writing that "the company's attempts to force Mr. Kanter off current and future cases are misguided and reflect what appears to be a willful misunderstanding and misrepresentation of federal ethics mandates."

Echoing earlier arguments from experts, the pair laid out why Kanter's recusal isn't required under federal law:

First, there is no evidence whatsoever that Mr. Kanter's work involving Google at the DOJ would affect his "financial interest." Second, Mr. Kanter has never represented either Google or the United States, the two parties that would be involved in any "particular matter" involving action by the federal government against the company. Third, although Google as a corporation with a clear financial interest in weak antitrust enforcement appears to be willing to question Mr. Kanter's impartiality, there is no basis for a reasonable person to do so given that Mr. Kanter's prior work has aligned with the federal government's interest in robust enforcement of antitrust law.

Kanter, a well-known antitrust attorney whose nomination by President Joe Biden last year was welcomed as a win for workers and consumers, "is eminently qualified to lead the Department of Justice's Antitrust Division, and it is unfair and inappropriate of your company to question his impartiality," the progressive lawmakers wrote.

"These efforts to bully regulators and avoid accountability—which are similar to those of Facebook and Amazon earlier this year—are untethered to federal ethics law and regulations, and we urge you to cease them immediately," they added. "Google should focus on complying with antitrust law rather than attempting to rig the system with these unseemly tactics."

Jayapal and Warren, joined by Sens. Richard Blumenthal (D-Conn.) and Cory Booker (D-N.J.), similarly called out Amazon and Facebook last year for attempting to "strip Federal Trade Commission (FTC) Chair Lina Khan of her authority to enforce antitrust law."

As with Kanter, Biden's nomination and the Senate's bipartisan confirmation of Khan, an "antitrust trailblazer," were widely celebrated by critics of Big Tech hopeful that the appointees will hold companies like Amazon, Facebook, and Google accountable for alleged illegal conduct.

"Google is right to fear that the company may have run afoul of federal antitrust law and that more aggressive enforcement from effective regulators could affect the company's operations and bottom line," wrote Warren and Jayapal, noting that U.S. officials "have filed a plethora of lawsuits against Google regarding alleged anti-competitive and exclusionary practices."

"If Google is serious about ending conflicts of interest in Washington, it can demonstrate its sincerity by supporting legislation, like the Anti-Corruption and Public Integrity Act, to strengthen federal ethics requirements," the Democrats said, referencing a bill they jointly reintroduced in 2020.

"Otherwise," they warned the Google CEO, "your efforts to sideline key federal regulators—like similar actions by Facebook and Amazon—simply serve as further evidence that you will go to all lengths to ward off necessary scrutiny of your immense market power."