December 22, 2024

“Never let a good crisis go to waste.” The quote, once uttered by one of Barack Obama’s advisors, now epitomizes the unjustifiable expansion and abuse of government power in the Joe Biden era. The pandemic brought a litany of experiments that proved disastrous, from distance learning and expansion of the money supply to lockdowns. After all the damage, and after witnessing the lengths to which Biden and his fellows are willing to go, anything seems permissible.

Whether the alternative leads us to a better or worse future is a question for another day. Nonetheless, with real incomes declining because of reckless monetary policy, it’s no surprise that Biden is in the line of fire. When totally reliable polls and fair, balanced outlets like NBC suggest that Donald Trump might be overtaking Biden, “democracy is in crisis, again.” At this stage, anything that doesn’t align with a Far Left domestic agenda and a hawkish foreign policy is “antidemocratic” and “xenophobic,” per the usual arguments.

But at a certain point, gaslighting simply stops working. It no longer sells when false dichotomies are nothing more than justifications for continuing failed progressive experiments. At some point, they simply can’t tolerate counternarratives. “You must not go down the rabbit hole,” warned the New York Times (the same paper that never returned its Pulitzer Prize after lying about the Holodomor). They’re afraid that their narrative is losing traction. In 2016, we were told that social media and “disinformation” caused Hillary Clinton to lose. During the 2020 general election, the Federal Bureau of Investigation pressured Twitter and Facebook over the Hunter Biden story, which many now acknowledge as authentic.

Despite being caught weaponizing federal agencies and using “third-party” organizations funded with taxpayer money to monitor, censor, and apply a double standard, Biden’s team simply carried on. Once again, per the Washington Post, Biden has been “working with TikTok creators to tell positive stories” about the Biden economy, hardly an unknown tactic.

But social media is hardly the only place where Biden and the Left seek to control the narrative. By this point, people have seen more than enough “expert” talking points about artificial intelligence (AI). The narrative often goes like this: AI will destroy democracy tomorrow unless it is properly regulated. Just a few weeks ago, Biden signed an executive order regulating the development and use of AI to ensure trustworthiness and prevent “discriminatory” algorithms. The executive order was heralded as a “step forward” by the Leadership Conference on Civil and Human Rights, an umbrella network of left-wing advocacy organizations including the Southern Poverty Law Center and the Anti-Defamation League.

The executive order delegates the power of implementing regulations to the National Institute of Standards and Technology and enforcement to the Department of Homeland Security. In its own words, “The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board.”

It follows a white paper published earlier by the Biden administration. While the executive order is explicit about “preventing discrimination” by automated algorithms in housing and policing, it is eerily vague about the internet and online communications. Regulatory and enforcement powers are delegated for application to “critical infrastructure sectors,” meaning AI models are regulated according to the critical infrastructure sector they belong to. The Department of Homeland Security, notably, has divisions dedicated to internet-related matters.

However, the white paper isn’t a dead end as far as clues about what a potential regulatory regime would look like. The paper, titled “A Blueprint for an AI Bill of Rights,” doesn’t limit itself to AI usage in healthcare, housing, finance, and criminal justice, even though most of the examples featured in the white paper and the proposed regulations concern AI in those specific areas. The talking points used to justify stringent regulation of AI in the aforementioned areas can translate beyond them, whether to chatbots or social media algorithms, as the paper (and the executive order) is part of the plan to tackle “inequity.”

In particular, two principles enshrined in the document can be applied to AI usage in any sector. The first, as Biden argues, is that “You should be protected from unsafe or ineffective systems,” which entails consultation with stakeholders (i.e., “diverse communities” and “experts”). Designs should “proactively protect you from harms . . . unintended, yet foreseeable, uses or impacts of automated systems” and from “inappropriate or irrelevant data use in the system.” Among the examples cited of “unintended, yet foreseeable” harm from automated systems is the allegation that counter quotes, criticism of racist quotes, and journalism by black people are unfairly throttled or moderated.

Remember, this occurs under the mantra of fighting “inequity.” Among the many expectations set by the white paper is that data fed into a system “should be relevant, of high quality.” But which data is “high quality” depends entirely on how Biden and company define it. The white paper also acknowledges that the “National Science Foundation funds extensive research to help foster the development of automated systems that adhere to and advance their safety, security and effectiveness.”

The second point Biden advances is the prevention of algorithmic discrimination through “proactive equity assessments as part of the system design.” Biden alleges that “automated systems can produce inequitable outcomes and amplify existing inequity” and that “data that fails to account for systemic biases in American society can result in a range of consequences.” One example cited is the automated contextualization of social media comments, where statements like “I’m a Christian” are more likely to be shared, while “I’m gay” might be blocked.

If one is on the Right, one is almost certainly ready to laugh at ChatGPT’s (and others’) left-wing, pro-Democratic biases. At best, right-wingers would only use X’s GrokAI here and there. But as shown above, it’s not merely chatbots that we’re talking about. It isn’t difficult to conclude that an enforced “equitable” algorithm, one that pushes progressive (or “nonprivileged,” for that matter) narratives, as in the “I’m a Christian” and “I’m gay” example Biden and company cited, amounts to a framework for quelling counternarratives to progressive orthodoxy on social media. X (formerly Twitter) might have broken away from progressive Big Tech, but it might soon be forced back into the progressive digital ecosystem.

Even if the proposals only affect chatbots, doing nothing against a deliberately vague executive order and such a controlling regulatory regime isn’t the correct answer. As nonleftists complain about the lack of youth in their ranks, especially as attention spans decline, chatbots are a gateway of quick information (or echo chambers) that can turn young people into progressives. Now multiply that by controlled algorithms on social media and search engines and see where it takes us.