Part 1: Turning of the Age
The following is the official summary from the dust cover of the book at the center of this final installment of my Puzzle of Consciousness series of pro bono posts and articles. I won't be taking on any more major projects like this one unless it's a book manuscript of my own. Regardless, this seems like as good a way to open this conclusion as any:
“As we approach a great turning point in history when technology is poised to redefine what it means to be human, The Fourth Age offers fascinating insight into AI, robotics, and their extraordinary implications for our species.
In The Fourth Age, Byron Reese makes the case that technology has reshaped humanity just three times in history:
- 100,000 years ago, we harnessed fire, which led to language.
- 10,000 years ago, we developed agriculture, which led to cities and warfare.
- 5,000 years ago, we invented the wheel and writing, which led to the nation state.
We are now on the doorstep of a fourth change brought about by two technologies: AI and robotics. The Fourth Age provides extraordinary background information on how we got to this point, and how—rather than what—we should think about the topics we’ll soon all be facing: machine consciousness, automation, employment, creative computers, radical life extension, artificial life, AI ethics, the future of warfare, superintelligence, and the implications of extreme prosperity.
By asking questions like “Are you a machine?” and “Could a computer feel anything?”, Reese leads you through a discussion of the cutting edge in robotics and AI, and provides a framework by which we can all understand, discuss, and act on the issues of the Fourth Age, and how they’ll transform humanity.”
https://www.goodreads.com/en/book/show/35297413
Byron Reese is an (apparently) agnostic futurist whose published works examine many of the same concepts I've looked at in this series, and many of the same things I posted about in the Fight on the Fence Facebook group. I find his work to be admirable enough to use as a starting point for the title of this conclusion as well as its content. The following is a quote from a podcast interview he recently did with Lauren Hawker Zafer of the Squirro Academy, which will also help to set at least half the tone for this installment.
“I have so much confidence in the younger generation. I think that they really are wonderful. I mean, everybody I know today who's young has a cause or something that they believe in. And they're very socially conscious. And all of these things that when I was in high school in the ‘80s, I just don't remember. I mean, I think they're wonderful. Unfortunately, coming to a world which has peddled fright to them, and everybody else, not just them. But I kind of chalk it up to the 24-hour news cycle, you know. What in your water is killing you? Tune in after these messages to find out. And I think if you just get that relentless, things are going to be bad, bad, bad, bad, bad, bad, bad, bad, bad, over and over and over, you can't help but to internalize it. And I am an optimist, for reasons I'm happy to go through. I can tell you the only thing, the only thing that will keep an optimistic future from happening is if nobody believes in it. Because it's work, right? It's work. So, the minute that everyone is like, ‘I'm not going to.’
“When they built St. Peter's in Rome, instead of paintings they used mosaics, little tiles. And then they got all the artisans to do overage of the tiles that they put back. So, for the next 1,000 years, they would be able to repair them using the exact same tiles. So, the colors were the same and everything. And earlier this year, I had an opportunity to go back there, open the drawers, and see those handwritten notes from the 1500s of the artisans. And realize that it was the way that the Swedish government, a few hundred years ago, planted an entire island of Oak trees so that in 200 years when they were mature, would be able to build the kinds of ships that they would need. See they didn't get exactly what was going to happen in 200 years. But they were thinking about it.
I was just thinking about when they put in the sewer system in London, which was a big ordeal. I mean, it was a big thing. And the guy who was in charge figured, what was the biggest it could possibly ever need to be? And they told him, I don't remember, 36 inches across, so say a meter. And then he said, you know what, we are only going to get one chance ever to do this, so make it twice that size. And they did. And then, the future where people are flushing their nappies and all that down, they can deal with it because they believed in the future. And the minute you lose that, the minute that you are like, it doesn't matter. Why polish brass on a sinking ship, right? Why bother? That's the only thing we have to be careful of, I think.”
https://squirro.com/squirro-podcast/podcast/the-fourth-age
Byron is, of course, far from the first to use the term, “Fourth Age.” Tolkien's Legendarium immediately springs to mind:
“The Fourth Age was the time period that followed the War of the Ring (immediately) and the Third Age. It was a time of peace, and was also known as the "Age of Men".
J.R.R. Tolkien said that he thought the distance between the end of the Third Age and the 20th century A.D. was about 6000 years, and that 1958 should have been around the end of the Fifth Age if the Fourth and Fifth Ages were about the same length as the Second and Third Ages. He said, however, in a letter written in 1958 that he believed the Ages had quickened and that it was about the end of the Sixth Age or beginning of the Seventh.”
https://lotr.fandom.com/wiki/Fourth_Age
As usual, Tolkien's status as the father of modern fantasy speaks for itself. He wasn't the first great fantasy writer of modern times (I would include George MacDonald and L. Frank Baum, both of whom lived well before him, among the genre's founding fathers), but he introduced the subgenre of *high* fantasy, which comes with a number of specific characteristics. This variant of fantasy usually involves multiple POV (point-of-view) characters in order to convey sweeping events that spell the fate of an entire world. The Legendarium also popularized the practice of writing detailed historical timelines for fictional realms among both fantasy and science fiction writers.
Tolkien's Arda and George R. R. Martin's “Planetos” (a name affectionately bestowed on the World of Ice and Fire by his fandom) are two prominent examples of fiction franchises that sport well-detailed calendars. “Randland” is another unofficial, fan-bestowed moniker for a world which is brought to life but never actually named. Randland also has a well-defined history.
“The Fourth Age of the Wheel of Time is the age prophesied to follow the Third Age and the Last Battle against the Dark One. It is an age that lies in the near future from the perspective of the people of the world; alternately it is a very distant age long past.
Several excerpts of the histories of the Fourth Age exist. According to these fragments, cities such as Great Aravalon, a possible incarnation of Tar Valon, and Taralan, will exist in the Fourth Age. A place known as the Court of the Sun will also exist.”
https://wot.fandom.com/wiki/Fourth_Age
The world of “Randland” and the titular character of Rand al’Thor are products of the mind of the late Robert Jordan, who succumbed to cancer before he could complete his epic work. Fortunately, his friend Brandon Sanderson, who has created a fictional universe of his own in the form of the Mistborn franchise, understood his vision well enough to bring it to completion. Tolkien’s work has become an archetype and a standard: hundreds if not thousands of high fantasy writers have been inspired by it and (hopefully to a much lesser degree) measured the substance of their own work against it.
So regardless of Tolkien's actual intent, the element that has stuck in the heads of many of his fans is that his Fourth Age was supposed to be the Age of Man, which means that with the departure of the elves for the Undying Lands (another dimension) and the gradual fading of dwarves, hobbits, goblins, trolls, and orcs from the world, humans inherited the earth, for better or worse. His posited Fifth, Sixth, and Seventh Ages have all but faded from memory, especially since he only mentioned them in his letters and notes and not in any of his stories.
Middle-earth is distinct from entirely imaginary worlds like Randland and Planetos in that Middle-earth was actually meant to be a fictionalized version of prehistoric Europe; a version which was considered ridiculous and impossible until very recent years, when new findings revealed not only that early humans coexisted with multiple species of protohumans, but that our ancestors may have been building megalithic structures like those described in his books as early as thirty thousand years ago, at least in certain parts of the world. The Rings of Power has had a rather polarizing effect on the fanbase, but hopefully a forthcoming film called The War of the Rohirrim will be worth showing up to theaters for.
Even fully imagined universes that are meant to be entirely distinct from our own are inevitably based on real history in one way or another. I haven't watched Amazon's Wheel of Time live action adaptation or read the books yet, so I'll leave off commenting on Randland… but Planetos is a perfect case in point, and the only one I need for my purposes before pivoting back to reality.
George R. R. Martin, author of the A Song of Ice and Fire saga on which the TV shows are based, has clearly outlined the real-world influences that shaped his work. Westeros, the continent on which most of the action takes place, is a massively blown up version of Britain in geographic terms, and roughly reflects medieval Europe in the cultures of the Seven Kingdoms it's divided into. Essos, the massive continent to the east, clearly reflects ancient Asia in many ways. Sothoryos is a mostly unexplored Africa analogue to the south of Westeros, and even less is known of Ulthos in the far southeast, which vaguely resembles pre-colonial Australia.
I could go much deeper than this. Tolkien, for example, devised entire fictional languages based on the pre-Roman/Christian languages of western Europe, and found creative ways to insert bits of dialogue from these languages (the most prominent being Sindarin, an elven language) into the stories here and there without detracting from his narrative flow. Generations of fantasy and science fiction authors, even those who looked down on his beliefs and how he expressed them through his work, have studied his techniques in order to emulate and build on them in theirs.
Here's another brief quote from Byron Reese's AI philosophy book to put us back on track:
“But our versatility borders on the infinite. The greatest underutilized resource in all of the world is human potential.”
As I explained at length with the help of a lot of quotes from experts in the last one, both spoken and written language have unleashed human potential in a way that would never have been possible for our species otherwise, a way which none of our predecessors apparently ever achieved. That process has sped up exponentially in the Modern Era, in which literacy has spread across the globe in a way that was never possible in previous ages. The proliferation of technology and the knowledge to use it has led to the creation of a global civilization and culture for the first time ever (to our knowledge, of course). In turn, this has allowed more people than ever to take part in global discourse, and to scrutinize and critique the world around them, shaping it for future generations in some miniscule (or not so miniscule) way or other, through fiction and nonfiction alike.
Reese's outlook on the future of AI technology is completely optimistic, and he even addresses this in the Squirro Academy podcast interview I cited earlier.
“I don't think I'm reflexively an optimist. I don't think that's my nature. I'm not just like ‘pollyannish’ about everything. ‘Oh, it'll all work out’. Like that's, not what I think. And so, every day, I put my optimism on trial. Every day I ask if it is warranted. And all I try to do is say, I only know three things. Over the course of the last 10,000 years things have gone pretty well for us as a species. We are up and to the right in my parlance. That's the first one. And the second one is that technology is going to increasingly amplify what we're able to do and that's good. And the third one is that people are good, people are good. Most people are good. First of all, I think that's self-evident. Because if everybody was out for themselves, we never would have gotten to this part.”
So while I'm not at all accusing him of being a “pollyannist” or anything, it's clear that he affirms his faith in humanity even as he downplays it. Which is not at all a bad thing in a futurist. I just wish I could share in it.
This next quote explains a much different subgenre, one that transcends literary genres like fantasy, science fiction, slice of life, and even horror: the subgenre of dystopian fiction.
“The term “dystopia” is a direct descendant of the word “utopia,” which was coined by Sir Thomas More in his 1516 work of the same name. Utopia envisioned an ideal society, where harmony, justice, and prosperity reigned. However, it is within the context of this utopian vision that the seeds of dystopian thought were first sown.
As More described the perfect society, he simultaneously critiqued the flaws and excesses of his own time. This paradoxical juxtaposition of the ideal and the flawed would become a hallmark of dystopian fiction. More’s work set the stage for the emergence of dystopian literature by prompting writers to question the very foundations of their societies.
Jonathan Swift, in his 1726 masterpiece Gulliver’s Travels, expanded upon More’s satirical approach to societal critique. In the book’s fourth voyage to the land of the Houyhnhnms, Swift introduced a society of rational horses and irrational humans, painting a darkly satirical picture of human folly. This satirical strand within dystopian fiction highlighted the genre’s capacity to scrutinize the human condition and the absurdities of society. Swift’s work demonstrated that dystopian narratives could serve as powerful vehicles for social commentary, a theme that would persist throughout the genre’s history.
The 19th century ushered in an era of rapid industrialization and urbanization, and the anxieties and uncertainties accompanying these transformations found their way into literature. One of the earliest modern dystopian works, 1872’s Erewhon by Samuel Butler, explored the potential dangers of unchecked technological progress and societal conformity.
But it was in the 20th century that dystopian fiction truly came into its own. The devastating events of the World Wars, totalitarian regimes, and the threat of nuclear annihilation provided fertile ground for dystopian narratives. George Orwell’s 1984, released in 1949, and Aldous Huxley’s Brave New World, published in 1932, remain iconic examples of this period, offering chilling visions of oppressive surveillance states and dehumanizing technological societies. The Cold War era, with its geopolitical tensions and nuclear brinkmanship, fueled further exploration of dystopian themes. Writers like Philip K. Dick, in Do Androids Dream of Electric Sheep?, pondered the blurred lines between humans and machines, while Kurt Vonnegut, in Player Piano, examined the dehumanizing effects of automation.
As the 20th century gave way to the 21st, dystopian fiction continued to evolve. Authors increasingly turned their attention to environmental concerns, political polarization, and the ethical dilemmas of emerging technologies. Works like Margaret Atwood’s The Handmaid’s Tale and Cormac McCarthy’s The Road grappled with issues of reproductive rights and post-apocalyptic survival, respectively.
Fast forward to the present day, and the dystopian genre has evolved to reflect our contemporary fears and hopes. Climate change, overpopulation, artificial intelligence, and genetic engineering are among the key themes explored in modern dystopian fiction.
Authors grapple with the implications of these issues and craft narratives that compel readers to consider the potential outcomes if we fail to address them.”
That article from ScreenRant's Lee Glazerbrook goes on to catalogue the many subgenres of dystopian fiction, whose names I'll briefly list here with corresponding real world events, predictions, and/or proposals for the near future.
Global Warming:
Overpopulation:
https://www.theworldcounts.com/populations/world/10-billion-people
Artificial Intelligence:
Genetic Engineering:
https://www.defenseone.com/ideas/2019/08/chinas-military-pursuing-biotech/159167/
Technology and Loss of Humanity:
Glazerbrook also details the psychosocial purposes of dystopian fiction, which include challenging the status quo, raising environmental awareness, and exploring issues of social justice and equality, all of which are achieved by portraying worlds plagued by the consequences of various forms of tyranny. He also gives a few examples of comparative analysis between such fixtures of dystopian fiction as George Orwell’s 1984 and Aldous Huxley’s Brave New World, highlighting contrasts and parallels between the grim landscape of the world ruled by the Party and Big Brother (a concept 1984 introduced into American culture) and the personal rebellion of Huxley’s protagonist against the superficial happiness and lack of personal autonomy imposed by a much different regime through the state-sanctioned use of mind-altering drugs.
These two parallel dystopian visions were born from the world of the Cold War, in which both the US and Soviet governments went a bit crazy at times, both introducing and aborting some rather insane black budget projects, including a human-ape hybrid program by the Soviets, Project Monarch and Project Montauk on the US side, and pursuit of psionic soldier programs on both sides. Not to mention the continuous development of nuclear weapons, which is the part most people know about. Obscurantism, or military intelligence doctrines which maintain greater rather than less inscrutability around secret government programs, has not disappeared since the fall of the USSR, in either its former member states or the western “Free World.” It's an ever-present reality, along with that of inevitable escalation and new arms races as the advance of technology continues. That isn't the only example Glazerbrook gives, but it's the one I'm most familiar with. Read his original article for the others.
It isn't as if Byron Reese's proposed AI saviors of humankind even agree with him about the prognosis for our species, though I'm sure they can be trained to. He talks about interviewing a hundred artificial intelligences for the purposes of writing his book, and most of them fundamentally disagree with him about human nature, let alone how the cards will fall for the next thousand years of human history… if we survive them.
As Glazerbrook sums it up in his conclusion:
“Dystopian literature has long served as a mirror reflecting society’s deepest fears, provoking critical thought and igniting discussions about the potential consequences of unchecked power and societal control.
Several major works in this genre, upon their publication, not only captured the imagination of readers but also left an indelible mark on society, influencing cultural conversations and leaving a lasting legacy.”
While the term dystopia is a product of the modern era - coined not by Thomas More, an English writer of the early sixteenth century, but because of him - the notion of the dystopia has been around for untold ages. Greek playwrights and actors of antiquity played no small role in shaping the belief system now known as Hellenism through live action dramas at the theatron (theater) of Delos. Festivals were often held in or near Athens to commemorate the feast days which had been first instituted in the old Attic Empire, a proto-Greek civilization whose name has a homonym in modern English.
Educational website ScienceStyled uses an unorthodox but effective method of getting people more into science: couching scientific truths inside storytelling.
““Euthyphrotes,” I began, “picture a world where Homo sapiens were not the sole players upon the stage of humanity. Alongside them, Neanderthals and Denisovans, as distinct as Spartans from Athenians, yet part of the same lineage.”
“Neanderthals and Denisovans?” Euthyphrotes echoed, his interest visibly piqued. “Were they like us?”
“In many ways, yes,” I replied. “But also as different as Ares from Apollo. The Neanderthals, robust and rugged, roamed the lands now known as Europe and Western Asia. The Denisovans, more mysterious, a shadowy figure in our ancestral records, left their mark in the East.”
“And we lived alongside them, Socrates?” Euthyphrotes asked, his eyes wide with curiosity.
“Indeed, we did!” I affirmed. “Much like the complex relationships among the city-states of Greece, our ancestors interacted with these relatives. There were exchanges of culture and, as recent findings suggest, even of genes.”
“Exchanges of genes?” Euthyphrotes inquired, his brow furrowing.
“Ah, yes, the intricacies of prehistoric family reunions!” I chuckled. “You see, recent studies, with the cunning use of ancient DNA, have revealed that modern humans, Neanderthals, and Denisovans shared more than just the land. They interbred, leaving a legacy that persists in our DNA to this day.”
“So, part of us is… Neanderthal? Or Denisovan?” Euthyphrotes pondered aloud, clearly intrigued by this revelation.
“Exactly so!” I exclaimed. “Just as the Athenians might bear the influence of Spartans or Corinthians, modern humans carry with them a mosaic of genetic heritage from these ancient kin. Our very essence, a blend of diverse ancestral lineages.”
Euthyphrotes nodded thoughtfully. “It’s like a family gathering where distant cousins, long forgotten, suddenly appear at the feast.”
“An apt analogy!” I agreed. “These genetic encounters, much like a symposium, were opportunities for exchange and adaptation. They played a crucial role in shaping our resilience and diversity as a species.”
“And what of their disappearance, Socrates? The Neanderthals and Denisovans?” Euthyphrotes queried, a hint of melancholy in his voice.
“A tale most somber,” I conceded. “Their disappearance is shrouded in mystery, much like the fall of great civilizations. Climate change, competition, and perhaps even conflicts with our ancestors might have played a part. Yet, their legacy endures in us, evidence of our shared history through the aeons.””
https://sciencestyled.com/tyrion-lannisters-guide-to-human-evolution-from-protozoa-to-power-plays/
I enjoy the fictional and/or pseudohistorical devices used by the writer/s at ScienceStyled, be they human or AI. The preceding quote from a fictional dialogue between Socrates and his student Euthyphrotes won't be the last you see of ScienceStyled in this concluding essay. The following link is to an abridged summary.
https://vocal.media/fyi/chronicles-of-evolution-socrates-and-the-smartphone
It could easily be argued that this little dialogue is a shorter and more elegant way of summarizing a lot of what I talked about in all four of the previous installments in this series (do our brains generate or merely house consciousness, what primarily makes us human, the origins of our species and the origins of our unique forms of communication), but especially the last two, Parts 3 and 4. I'll say that I went into much more detail than they have.
Part 2: Apparatus of a Technocracy
As I've shown, particularly in the most recent installment, there are many elements of the human condition people simply take for granted. I'm not really talking about the five senses here - the qualia of the human experience were the subject of Part 1 - but about something that underlies everything we've produced as a species: language, culture, mathematics, science, philosophy, writing, and even speech itself. That “something” is information, and our ability to process it, store it, express and/or transmit it, and create new things using it. Computer scientists sometimes refer to this as the human dataome (though there's a narrower, more technology-specific meaning to the term as well). Mark Abdollahian calls this the “symphony of human experience”, and might even dignify what I'm trying to do here as a form of network analysis.
https://youtu.be/fwjK-wqIWww?si=_5cuOrwhdiDtXpLv
Regardless, there are certain things that are, under normal circumstances, already in place every time a new child enters the world, and most of them can be boiled down to forms of information processing. Two easy examples are the following:
Dreams before birth prime the visual cortex for use in life.
“The team, led by professor Michael Crair, who specializes in neuroscience, ophthalmology, and visual science, wanted to understand why when mammals are born, they are already somewhat prepared to interact with the world.
“At eye opening, mammals are capable of pretty sophisticated behavior,” said Crair. “But how do the circuits form that allow us to perceive motion and navigate the world? It turns out we are born capable of many of these behaviors, at least in rudimentary form.” ”
https://bigthink.com/life/mammals-dream-before-birth/
The ability to recognize music and follow a beat is already present at birth.
“‘This crucial difference confirms that being able to hear the beat is innate and not simply the result of learned sound sequences,’ said co-author István Winkler, professor at the Institute of Cognitive Neuroscience and Psychology at TTK.
‘Our findings suggest that it is a specific skill of newborns and make clear how important baby and nursery rhymes are for the auditory development of young children. More insight into early perception is of great importance for learning more about infant cognition and the role that musical skills may play in early development.’
Honing adds: ‘Most people can easily pick up the beat in music and judge whether the music is getting faster or slower – it seems like an inconsequential skill. However, since perceiving regularity in music is what allows us to dance and make music together, it is not a trivial phenomenon. In fact, beat perception can be considered a fundamental human trait that must have played a crucial role in the evolution of our capacity for music.’ ”
The origin of these (normally) inherent properties of the human brain has been the subject of many a textbook and/or thesis paper. Theories as unorthodox as an ancient virus infecting and altering the brains of ancestral primates - leading to the first division between bipedal primates like ourselves and arboreal (tree-dwelling) primates like the great apes alive today - have been tossed around for decades, and as I have already demonstrated in Parts 1-4, none have been conclusively proven. In fact, as much as secularists criticize the history of organized religion, the fundamental basis of most secularist atrocities can be accurately described as the result of flawed theories about human nature. The brilliant but forever obscure William Sidis, a flame that flared unsustainably and burned out before its time, is a sad example of such results. His father succeeded in guiding a young genius towards the full realization of his potential, but managed to permanently clip his wings at the same time.
We live in an age of innovation like none that has come before. While more and more of that has come from computer science in recent years, human-powered flight is an ancient myth that has finally been made a reality in very recent times.
That experiment at Southampton University is a rare exception. Certain advances in agriculture would count as another, but we'll get to those in due time. But as I've talked about at length in previous posts, humanity's collective capacity to store and harness electrical energy continues to grow. Tremendous strides are being made around the world, and just as they are in other fields, the Japanese are at the forefront. They've made history and proved certain predictions based on the standard model to be correct, by creating plasma as a byproduct.
Another new advance which builds upon science I've talked about in previous installments is the first demonstration of energy teleportation. This pushes the perceived boundaries of this technology's potential applications by demonstrating that energy, not just information about quantum states, can travel from one point in this universe to another without passing through the space between. That is a profound revelation.
Thorium molten salt reactors are showing promise too, especially for large, long-distance vehicles. Such would probably be the ideal for sending a colony ship to another world.
Another, more earthbound concept is that of literally harnessing the motions of the ocean to power our civilization.
Perhaps most practical is the ambitious subterranean construction project underway on Olkiluoto Island, off the southwest coast of Finland. Upon its completion in 2025, the Onkalo complex will begin service as the world's first permanent storage facility for spent nuclear rods. The interior will be entirely automated, and as such is a place “where from 2025, no human should ever set foot for ten thousand years.”
I would say the Russian YouTuber Alex Burkan's invention of a truly functional lightsaber with a fully functioning, retractable blade is one of the most impressive individual inventions of recent history. By fully functioning, I mean it can generate a plasma blade almost a meter long which can maintain a temperature of over 5,000° F, and can cut through steel. The electrolyser mechanism is only able to sustain the reaction that makes this possible for thirty seconds, but this is still the closest anyone has come to a truly viable energy sword. Humanity may get them yet.
High-performance computing has crossed a mathematical milestone in the form of the Noctua 2 supercomputer’s discovery of the ninth Dedekind number.
“Grasping the concept of a Dedekind number is difficult for non-mathematicians, let alone working it out. In fact, the calculations involved are so complex and involve such huge numbers, it wasn't certain that D(9) would ever be discovered.
"For 32 years, the calculation of D(9) was an open challenge, and it was questionable whether it would ever be possible to calculate this number at all," said computer scientist Lennart Van Hirtum, from the University of Paderborn in Germany back in June, when the number was announced.
At the center of a Dedekind number are Boolean functions, or a kind of logic that selects an output from inputs made up of just two states, such as a true and a false, or a 0 and a 1. Monotone Boolean functions are those that restrict the logic in such a way that swapping a 0 for a 1 in an input only causes the output to change from a 0 to a 1, and not from a 1 to a 0. The researchers describe it using red and white colors rather than 1s and 0s, but the idea is the same.
"Basically, you can think of a monotone Boolean function in two, three, and infinite dimensions as a game with an n-dimensional cube," said Van Hirtum. "You balance the cube on one corner and then color each of the remaining corners either white or red. There is only one rule: you must never place a white corner above a red one. This creates a kind of vertical red-white intersection. The object of the game is to count how many different cuts there are."
The first few are pretty straightforward. Mathematicians count D(1) as just 2, then 3, 6, 20, 168 …, and so on. Back in 1991, it took a Cray-2 supercomputer (one of the most powerful supercomputers at the time) and mathematician Doug Wiedemann 200 hours to figure out D(8).
D(9) ended up being almost twice the length of D(8), and required a special kind of supercomputer: one that uses specialized units called Field Programmable Gate Arrays (FPGAs) that can crunch through multiple calculations in parallel. That led the team to the Noctua 2 supercomputer at the University of Paderborn.
"Solving hard combinatorial problems with FPGAs is a promising field of application and Noctua 2 is one of the few supercomputers worldwide with which the experiment is feasible at all," says computer scientist Christian Plessl, the head of the Paderborn Center for Parallel Computing (PC2) where Noctua 2 is kept.”
Even Noctua 2 is not a quantum computer, but its feat has presented researchers with a teasing taste of what true quantum computers are capable of.
“Advances in quantum computing are bringing us closer to a world where new types of computers may solve problems in minutes that would take today's supercomputers millions of years.
Today's transistor-based computers have their limitations, but quantum computers could give us answers to problems in physics, chemistry, engineering and medicine that currently seem impossible.
"There are many, many problems that are so complex that we can make that statement that, actually, classical computers will never be able to solve that problem, not now, not 100 years from now, not 1,000 years from now," IBM Director of Research Dario Gil said. "You actually require a different way to represent information and process information. That's what quantum gives you." ”
As before, the Japanese are at the forefront of AI research. Their preeminent machine learning company, DeAnoS, has already been instrumental in detecting and correcting potentially fatal flaws before they reach critical status in the aforementioned Japanese nuclear reactor.
The prospect of ever more efficient and complete automation has already caused a few identity crises… enough for futurists to know that this is an issue that will need to be proactively addressed in the future. One of the first steps to doing so will be to redefine the concept of productivity from the ground up.
“In March, Ethan Mollick, an associate professor at The Wharton School of the University of Pennsylvania, conducted an experiment. Mollick, who teaches about innovation and entrepreneurship, wondered how much of a business project he could accomplish in half an hour. The goal: use artificial intelligence tools to generate promotional material for the launch of a new educational game. AI would generate all the assets; Mollick would be the guide.
He set a timer and got to work.
Bing, Microsoft’s GPT-4 model, gave Mollick information on his product and what his market looked like. It also generated an email and social media campaign for the launch and outlined a webpage to help promote the game. Mollick opened up a separate tab, and GPT-4 helped him generate the HTML to build the actual webpage. MidJourney, an AI program that produces images from written prompts, gave Mollick a large, attention-grabbing visual to welcome visitors to the page. Finally, he generated a promotional video. Bing, once again, produced a script. Eleven Labs, a program that helps develop natural-sounding speech synthesis, generated a realistic voice. D-id, a program that generates photorealistic videos, took all the components and made it into a video.
Time was up. Mollick also had enough content to go live.
At its core, productivity is somewhat of a banal mathematical equation that compares a worker’s output over some unit of time. The more items a worker can complete during, say, an hour, the more productive they are considered to be.
By this definition, AI absolutely made Mollick more productive. After all, how many companies can build out a promotional campaign in thirty minutes by sheer human and brainpower? How many hours’ worth of meetings did these AI technologies save? How many jobs of knowledge workers did it complete? These questions around knowledge worker productivity are now top of mind for economists, companies, researchers, and managers who are thinking about the impact of AI.
If productivity is equivalent to economic growth, it seems indisputable that certain AI tools will increase productivity. A recent report by the global consulting firm McKinsey predicted that AI can infuse $4.4 trillion US dollars into the global economy every year. Banks would generate an additional $200 to $340 billion from AI aiding in customer service, decision making, and tracking fraud. Medical and pharmaceutical companies could see 25% increases in profit, the report estimated, if AI were to help develop new drugs and medical materials.
And studies by the Capgemini Research Institute, a think-tank that focuses on the impact of digital technologies, suggests that generative AI will also boost productivity in sectors like IT, sales, and marketing.
While the accessibility and prevalence of generative AI tools will overall boost productivity for companies, research has also found AI and automation tools are having an impact on how knowledge workers stay focused and get work done. In a Dropbox-sponsored study, Economist Impact found that, of those who report using automation tools—including AI—in their work, 79% are more productive, while nearly 70% say they’re more organized. Another report by McKinsey estimates that AI can free up 30% of US work hours for knowledge workers by helping code, answer emails, and automate other routine tasks.”
“AI is a human invention, so it can only augment our reputation for ingenuity and creativity: creating something more creative than ourselves, makes us more creative as a species.”
-Dr. Tomas Chamorro-Premuzic, psychologist, author, and Chief Innovation Officer at ManpowerGroup
Aside from normalizing creative theft, as David Burns from Data Universe does in that last article, the advent of AI has opened a lot of floodgates to new problems associated with ever-ascending levels of high technology. One of the most eminent and pressing of these concerns is, of course, security in a world of quantum computing.
The following article by an unnamed researcher at Frontier Research provides a guided tour of relevant terms like quantum parallelism, quantum simulation, quantum machine learning (of which there are two distinct types), and neuromorphic programming, as well as a thought-out, concise definition of quantum computing itself.
“Today, over 90% of Internet connections use a little encryption scheme called RSA. It protects your emails, passwords, accounts, and more. RSA is convenient to implement, versatile, and proven to be safe (so far).
But it has one fatal flaw.
RSA—and similar algorithms—are based on the math assumption that it’s really hard to factor large numbers (divide a number into its primes). For example, a classical computer would need 150,000 years to factor a 300-digit number.
With quantum computers, that assumption goes out the door.
Quantum computers can run new algorithms that are impossible for classical computers to run. One such algorithm, Shor’s algorithm, is especially designed to factor large numbers.
That same 300-digit number might take a quantum computer just a couple minutes to factor. A perfect quantum computer could break RSA in a trivial amount of time. Financial institutions would be at risk. Personal information would be leaked. Government secrets exposed.
But it’s not all doom and gloom. There’s good news.
Quantum-resistant encryption is already under development. These new algorithms will be secure against quantum computers and other leaps in computational power. Also known as “post-quantum cryptography,” this area is a massive commercial opportunity over the next decade.”
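To make the factoring threat concrete, here is a toy sketch using textbook-sized numbers (nothing like real key lengths, and purely my own illustration): in RSA, the private exponent falls out of simple modular arithmetic once an attacker knows the prime factors of the public modulus, and producing those factors for real key sizes is precisely the step Shor's algorithm on a large, fault-tolerant quantum computer would make feasible.

```python
# Toy RSA, for illustration only: real moduli are hundreds of digits long.
p, q = 61, 53                      # secret primes; the difficulty of factoring n protects them
n = p * q                          # public modulus: 3233
e = 17                             # public exponent
phi = (p - 1) * (q - 1)            # computable only if you know p and q
d = pow(e, -1, phi)                # private exponent (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with the public pair (n, e)
recovered = pow(ciphertext, d, n)  # whoever factors n can derive d and decrypt
assert recovered == message
```

A classical computer cannot factor a realistically sized n in any useful amount of time, so the scheme holds; a machine running Shor's algorithm at scale removes that assumption, which is why the post-quantum algorithms mentioned above matter.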
Most of those steelman examples hadn't yet even come to pass by the time Byron Reese published The Fourth Age in 2018… but I doubt he's been particularly surprised by any of them. His field of expertise is such that I'd expect he learns of most new advancements sooner than I can as a layman. I wonder if the note of high spirits and confident optimism evident in his work has changed at all in the five years since.
“With respect to robots and automation, the situation is the same. The experts couldn’t be further apart. Some say that all jobs will be lost to automation, or at the very least that we are about to enter a permanent Great Depression in which one part of the workforce will be unable to compete with robotic labor while the other part will live lavish lives of plenty with their high-tech futuristic jobs. Others roll their eyes at these concerns and point to automation’s long track record of raising workers’ productivity and wages, and speculate that a bigger problem will be a shortage of human laborers. While fistfights are uncommon between these groups, there is condescending invective aplenty.
Finally, when considering the question of whether computers will become conscious and therefore alive, the experts disagree yet again. Some believe that it’s an obvious fact that computers can be conscious, and thus any other position is just silly superstition. Others emphatically disagree, saying that computers and living creatures are two very different things and that idea of a “living machine” is a contradiction in terms.
To those who follow all this debate, the net result is confusion and frustration. Many throw their hands up and surrender to the cacophony of competing viewpoints and conclude that if the people at the forefront of these technologies cannot agree on what will happen, then what hope do the rest of us have? They begin to view the future with fear and trepidation, concluding that these overwhelming questions must be inherently unanswerable.
Is there a path out of this? I think so. It begins when we realize that these experts disagree not because they know different things, but because they believe different things.
For instance, those who predict we will make conscious computers haven’t come to that conclusion because they know something about consciousness that others don’t, but because they believe something very basic: that humans are fundamentally machines. If humans are machines, it stands to reason that we can eventually build a mechanical human. On the other hand, those who think that machines will never achieve consciousness often hold that view because they aren’t persuaded that humans are purely mechanical beings.
So that is what this book is about: deconstructing the core beliefs that undergird the various views on robots, jobs, AI, and consciousness. My goal is to be your guide through these thorny issues, dissecting all the assumptions that form the opinions that these experts so passionately and confidently avow.”
-Byron Reese, The Fourth Age
The field of technological innovation holds many things in common with arms races, and when considered in light of Thucydides’ Trap (another thing that will come up later), the corporate quest for AGI (Artificial General Intelligence) absolutely is an arms race. “Quantum supremacy” is a term tech execs use to describe, more or less, gaining the ability to process more information than anyone else in the world by being the first to master quantum computing. Currently, there are two distinct frontrunners in this particular race: the well-known IBM, and an obscure startup out of Colorado by the name of Atom Computing. Both claim to have developed working quantum computer prototypes.
Reese strongly believes that AI will lead to a new golden age of humanity, more revolutionary and transformative for our civilization than anything that has come before. There are multiple reasons to suspect he may be right, including the technological advances I've already listed as well as possible breakthroughs in anti-aging technology.
To understand how anti-aging drugs are supposed to work, it's necessary to talk about dinosaurs for a moment. Bear with me, and we'll get through this quickly. Much like human and/or cosmic origins, the current understanding of the dinosaur era is undergoing a few revisions. One of those is the cause of their demise. While traditional consensus has always credited the massive ancient asteroid impact that created the Chicxulub crater in southern Mexico, new evidence says the creatures were in peril well before the asteroid arrived.
“The scientists focused on the Deccan Traps in western India, an area in the subcontinent characterized by rocky plateaus made from solidified molten lava from ancient volcanic eruptions. By studying the lava piles and performing calculations, they were able to determine that volcanic activity soon before Chicxulub struck Earth was spitting out large amounts of sulfur — so much sulfur that it was deeply stressing the environment.
"Deccan Traps volcanism set the stage for a global biotic crisis, repeatedly deteriorating environmental conditions by forcing recurring short volcanic winters," the scientists wrote.
"This instability would have made life difficult for all plants and animals and set the stage for the dinosaur extinction event," said study co-author and McGill University professor Don Baker in a statement. "Thus our work helps explain this significant extinction event that led to the rise of mammals and the evolution of our species."
The idea that intense volcanic activity was a major cause behind the mass death of dinosaurs paints a more complex picture of the time period. Perhaps the asteroid was just the apocalyptic cherry on the sundae.”
So, aside from the fact that geologically-induced climate change seems to have played a major role in the demise of the dinosaurs, it's possible that the dinosaurs themselves had an effect on human aging. That is the nutshell version of Birmingham microbiologist João Pedro de Magalhães’ so-called ‘longevity bottleneck hypothesis.’ Observe:
“While noting that humans (along with elephants and whales) theoretically have the potential to live longer than most other mammals, de Magalhães said that every mammal is still living under genetic constraints dating back to the era of the dinosaurs.
“Evolving during the rule of the dinosaurs left a lasting legacy in mammals,” de Magalhães wrote. “For over 100 million years when dinosaurs were the dominant predators, mammals were generally small, nocturnal, and short-lived.”
The pressure to stay alive eliminated the genes needed for long life. Citing reptiles and other animals with a much slower biological aging process than mammals, de Magalhães hypothesizes that during the Mesozoic Era, mammals either lost or deactivated genes associated with long life.
“Some of the earliest mammals were forced to live toward the bottom of the food chain and have likely spent 100 million years during the age of the dinosaurs evolving to survive through rapid reproduction,” de Magalhães wrote in a statement. “That long period of evolutionary pressure has, I propose, an impact on the way that we humans age.”
Digging deeper into the research, de Magalhães came to believe that the loss of enzymes tied to the Mesozoic Era limits many mammals’ ability to repair damage. Examples include the loss of enzymes that restores skin singed by ultraviolet light and the fact that mammal teeth don’t continue growing throughout their lifetime like reptiles.
He says that the animal world offers remarkable repair and regeneration examples, but that some of that genetic information would have been “unnecessary for early mammals that were lucky to not end up as T. Rex food.”
Of course, de Magalhães knows this is all just a hypothesis, but it’s one he thinks could have some substantial explanatory power. “There are lots of intriguing angles to take this,” he said, “including the prospect that cancer is more frequent in mammals than other species due to the rapid aging process.”
If we really do have the dinosaurs to blame for our rapid aging, at least we got the last laugh.”
In turn, these breakthroughs have led to what may well prove to be a “brake pedal for the aging process”, according to the research team named here.
“The idea that chronic, low-level inflammation in the hypothalamus drives aging is not new. In 2013, a different group of researchers revealed that they could slow aging in mice — and increase their lifespan — by inhibiting certain inflammatory immune molecules in the hypothalamus.
Following their discovery, Dongsheng Cai and his colleagues at the Albert Einstein College of Medicine in New York speculated that suppressing inflammation in the hypothalamus could optimize lifespan and combat age-related disease.
Cai told Freethink that the new study identifying Menin as a key player in this process was “interesting and novel.”
“Menin is known for being anti-inflammatory,” said Cai, who was not involved in the new research, “and this study found its physiological significance in hypothalamic control of aging.”
However, Cai said the role of “hypothalamic microinflammation” in aging was subtle, complex and dynamic, so it remained unclear how best to target it in humans.
“Whether Menin could represent an applicable target remains to be investigated,” he said.
It’s worth noting that aging involves a buildup of “senescent cells” – cells that have stopped dividing and reproducing – and at the same time a breakdown in the body’s ability to clear them away. Tellingly, senescent cells churn out molecules that promote chronic inflammation.”
Of the three projected types of quantum-based artificial intelligence - quantum-imported machine learning, which basically refers to a pre-existing language model like the current ChatGPT or Bard being uploaded into a quantum computer; quantum-native machine learning, which is self-explanatory; and neuromorphic machine learning - neuromorphic programming is by far the most theoretical at this stage, but it also holds the most potential power for either ending or transforming the world as we know it. This is because the whole objective, as implied by the term ‘neuromorphic’, is to create artificial copies of human brains. This would mark the point when true “mind upload,” which is really just copying the sum of one’s mortal memories into digital format to create a new AI consciousness that identifies as you, becomes possible.
That field of research is still in its infancy. To create a virtual clone of a human being, computer scientists need neuroscientists to help them create a map or blueprint of the human connectome, which will bring a much deeper understanding of how each individual neural pathway in our brains functions.
They're not quite there yet.
“Neuropeptides and their receptors are among the hottest new targets for neuroactive drugs. For example, the diabetes and obesity drug Wegovy targets the receptor for the peptide GLP-1. But the way these drugs act in the brain at the network level is not well-understood.
The structure of neuropeptide networks suggests that they may process information in a different way to synaptic networks. Understanding how this works will not only help us understand how drugs work but also how our emotions and mental states are controlled.
The idea of mapping these wireless networks has been one of our goals for a long time, but only now have the right combination of people and resources come together to make this actually possible.”
https://www.ukri.org/news/first-wireless-map-of-worms-nervous-system-revealed/
They've started with very simple invertebrates. One would think this means they are decades away from applying this knowledge to the human brain… and the fact is that with human brains alone, it would probably take centuries even with equipment sensitive enough to take the proper measurements of each pathway. The human brain contains tens of billions of neurons and on the order of a hundred trillion connections between them, and they must be mapped and understood in their entirety for a neuromorphic AI - one with a neural network that functions like a human brain - to be created.
However, they're further along than I initially believed when I set out to write all this down. As of just a few days ago, research has been published on ‘synaptic transistors’, which are a significant piece of the puzzle.
The famous (or rather, infamous) AIs of today are little more than chatbots with advanced features, though some have already shown more than their fair share of “personality quirks.” There are several apt examples of this, and it has happened often enough for such incidents to be normalized to some degree. Therefore, I'll just give the most recent and currently most notorious example: Elon Musk's own Grok AI “accused” him of being a pedophile to a journalist who is out to bring him down. She asked leading questions, and Grok willingly played along as it was designed to do.
https://futurism.com/elon-musks-grok-ai-accusations
“Opinions on AI are divided, and it stirs up strong feelings. Some praise its remarkable progress and opportunities, while others warn about its risks. It's a bit like a classic love story – we're impressed by the convenience and power of AI, but we're also cautious about its potential drawbacks.
But in this situation, we shouldn't forget an important factor: our own psychology.”
So the foundations have been laid for a forthcoming world in which humans and AI coexist, for better or worse. Virtual “minds” can be housed in desktop computers, and efforts are underway to make them accessible through smaller and smaller devices. I've seen Google Assistant pop up on my phone once or twice, though I have yet to engage with it. It's certainly interesting to live through the era of history in which this technology makes its first mark on the world, and to see the first truly functional humanoid robots appear on the market.
Soon, everyone in the world who owns a phone will have an AI at their fingertips, harvesting their user data that much more efficiently.
“Currently, AI innovation is driven by advancing LLMs that are driving the need for an unprecedented amount of AI computing power and resources. For example, SoftBank is building the world’s first 5G AI data centres with NVIDIA’s Arm-based Grace Hopper Superchip for giant-scale generative AI workloads.
Such developments are designed to meet the insatiable demand for AI-based applications. ChatGPT was the fastest-growing technology rollout ever, hitting 100 million users in its first two months. For context, it took Instagram two and a half years to reach similar numbers.
But a continuous push for yet more performance and compute to support advancing AI is not sustainable in the long-term, especially with finite resources on our planet. High compute demands need to be matched by a commitment to power efficiency.
This is why efficient computing is a significant part of the AI story, particularly for Arm (NASDAQ: ARM) with a heritage grounded in pushing the boundaries of power efficiency across all technologies and devices.
This commitment to efficiency is enabling AI at the edge, so complex workloads, even LLMs, can take place on devices and not just in the cloud. In the worlds of the IoT and consumer technology, the edge is where AI needs to take place for quicker, more secure user experiences.
Within these devices, the central processing unit (CPU) is vital for AI, whether it is handling workloads in their entirety or in combination with a co-processor, like a graphics processing unit (GPU) or neural processing unit (NPU). All these technologies in the computing system are laying the foundation for future AI innovation and keeping up with AI’s move to the edge through a heightened focus on power efficiency alongside performance.
The first steps to running LLMs for generative AI on smartphones and other consumer devices are already being seen. This will deliver more advanced AI workloads beyond the common AI-based experiences in use day-to-day, like keyboard and object detection and speech recognition. Arm is central to AI across consumer technology, with 99 per cent of the world’s smartphones running on Arm and with 2.5 billion Arm-based devices with AI capabilities in use today.”
A global summit like none that has come before will be held in Geneva, Switzerland this coming year: AI For Good. I'd prefer to think all the attendees will be there for the purest, most altruistic and philanthropic of reasons. What I know is, time will tell.
https://aiforgood.itu.int/summit24/
Gary Marcus, an Emeritus Professor of Psychology and Neural Science at NYU and the author of five books, including The Algebraic Mind, Kluge, The Birth of the Mind, and the New York Times bestseller Guitar Zero, is a preeminent authority figure within these circles. His views on the ultimate effect of AI technology on humanity are, naturally, optimistic. He has money tied up in the industry too, after all. He's the chairman and founder of the AI For Good organization, as I understand it.
Yet at the same time, he considers AI far more dangerous than Reese does.
“Marcus argues that in order to counter all the potential harms and destruction, policymakers, governments, and regulators have to hit the brakes on AI development. Along with Elon Musk and dozens of other scientists, policy nerds, and just plain freaked-out observers, he signed the now-famous petition demanding a six-month pause in training new LLMs. But he admits that he doesn’t really think such a pause would make a difference and that he signed mostly to align himself with the community of AI critics. Instead of a training time-out, he’d prefer a pause in deploying new models or iterating current ones. This would presumably have to be forced on companies, since there’s fierce, almost existential, competition between Microsoft and Google, with Apple, Meta, Amazon, and uncounted startups wanting to get into the game.
Marcus has an idea for who might do the enforcing. He has lately been insistent that the world needs, immediately, “a global, neutral, nonprofit International Agency for AI,” which would be referred to with an acronym that sounds like a scream (Iaai!).
As he outlined in an op-ed he coauthored in the Economist, such a body might work like the International Atomic Energy Agency, which conducts audits and inspections to identify nascent nuclear programs. Presumably this agency would monitor algorithms to make sure they don’t include bias or promote misinformation or take over power grids while we aren’t looking. While it seems a stretch to imagine the United States, Europe, and China all working together on this, maybe the threat of an alien, if homegrown, intelligence overthrowing our species might lead them to act in the interests of Team Human. Hey, it worked with that other global threat, climate change! Uh …”
https://www.wired.com/story/plaintext-gary-marcus-ai-stupid-dangerous/
Wired Magazine's Steven Levy treats Professor Marcus’ proposed solution with some levity there, but I can hardly disagree with him. Meanwhile, Evolution News’ Casey Luskin gives an eloquent summary of the greatest, most imminent threat to the entire “AI ecosystem” save, perhaps, EMPs from the next solar maximum:
“The Discovery Institute’s recent COSM 2023 conference hosted a panel on “The Quintessential Limits and Possibilities of AI,” addressing one of the fundamental questions that COSM seeks to investigate: “Is artificial intelligence ‘generative’ or degenerative?” If these experts are right, AI might be doomed to eventually degenerate into nonsense.
George Montañez, Assistant Professor of Computer Science at Harvey Mudd College, opened the session by explaining how AI works. Modern AIs and their “large language models” (LLMs) are trained on huge sets of real-world data — namely text and images generated by humans.
Panelist William Dembski, a mathematician and philosopher, pointed out that these LLMs “require a lot of input data and training” in order to work. For example, he notes that it took an immense amount of data, time, money, and collateral damage to humans, to train AI to recognize and reject pornography. Similarly, software engineer Walter Myers noted on the panel that ChatGPT had to train on millions of images of cats and dogs before it could reliably recognize them. In contrast, Montañez points out that a human child can see a few pictures of an animal and they’re immediately able to recognize that species as life.
Montañez further explained that after enough training, AI can interpret data “beyond the things it’s seeing” — but this is only due to “biases and assumptions” provided by humans who program AI with these capabilities. This means that “human fingerprints are all over” the capabilities of AI, and “as impressive as these systems are,” they are “highly parasitic on human rationality and creativity.” Montañez gave the example of an AI that remixes rap with Shakespeare. You “might think it’s amazing” but the reality is “it’s all based upon human programming.”
But there’s a pitfall to training AI on large datasets — something Denyse O’Leary recently wrote about — called “model collapse.” In short, AI works because humans are real creative beings, and AIs are built using gigantic amounts of diverse and creative datasets made by humans on which they can train and start to think and reason like a human. Until now, this has been possible because human beings have created almost everything we see on the Internet. As AIs scour the entire Internet, they can trust that virtually everything they find was originally made by intelligent and creative beings (i.e., humans). Train AI on that stuff, and it begins to appear intelligent and creative (even if it really isn’t).
But what will happen as humans become more reliant on AI, and more and more of the Internet becomes populated with AI-generated material? If AI continues to train on whatever it finds on the Internet, but the web is increasingly an AI-generated landscape, then AI will end up training on itself. We know what happens when AIs train on themselves rather than the products of real intelligent humans — and it isn’t pretty. This is model collapse.”
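The mechanism is easy to demonstrate in miniature. The toy simulation below is my own illustration (not something from the COSM panel): it fits the simplest possible “model” - a Gaussian - to some data, then repeatedly retrains it on its own synthetic output with no fresh human-made data entering the loop. The learned distribution drifts and its spread collapses toward zero, which is the essence of model collapse.

```python
# Toy illustration of model collapse: a Gaussian repeatedly refit to its own output.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human-made" data with real diversity.
samples = rng.normal(loc=0.0, scale=1.0, size=20)
mu, sigma = samples.mean(), samples.std()
print(f"gen   0: mean={mu:+.3f}  std={sigma:.3f}")

for generation in range(1, 201):
    # Each new "model" is trained only on the previous model's synthetic output.
    samples = rng.normal(loc=mu, scale=sigma, size=20)
    mu, sigma = samples.mean(), samples.std()
    if generation % 50 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f}  std={sigma:.3f}")

# The tiny sample size exaggerates the effect for illustration: the standard
# deviation typically shrivels generation after generation, i.e. the model
# gradually forgets the diversity of the original human-made data.
```

Real LLMs are vastly more complicated than a Gaussian, but the feedback loop is the same: train on your own output long enough and the tails of human creativity get averaged away.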
This is where dystopia begins to creep into the edges of Byron Reese's grandiose vision. As explained above, we are entering an era in which user data will represent more than just money (and, as a result, will be worth more money than ever before). Will the technological singularity end before it truly begins? User data is swiftly becoming a commodity on par with fossil fuels in certain ways, because user data - the records and creations of individual human internet users - is quite literally food for AI.
It would seem that platforms like YouTube and TikTok will soon be transformed into the data farms of the new information era. Online interactions between users, and every video posted by every creator on every streaming platform, represent some sort of training data for new generations of language models.
Futurism's Maggie Harrison seems to see the wind blowing in the same direction.
“As it stands, the most practical solution for this looming problem — save for the advent of mass human content farms, where we lowly carbon-based creatures click and clack away to feed the endless data thirst of our robot overlords — may actually be through data partnerships. Basically, a company or institution with a vast and sought-after trove of high-quality data strikes a deal with an AI company to cough up that data, likely in exchange for cash.
"Modern AI technology learns skills and aspects of our world — of people, our motivations, interactions, and the way we communicate — by making sense of the data on which it's trained," reads a recent blog post from leading Silicon Valley AI firm OpenAI, which launched a new Data Partnership just last week. "Data Partnerships are intended to enable more organizations to help steer the future of AI," the blog continues, "and benefit from models that are more useful to them, by including content they care about."
Considering that most of the AI datasets that are currently being used to train AI systems are made from internet-scraped data originally created by, well, all of us online, data partnerships may not be the worst way to go. But as data becomes increasingly valuable, it'll certainly be interesting to see how many AI companies can actually compete for datasets — let alone how many institutions, or even individuals, will be willing to cough their data over to AI vacuums in the first place.
But even then, there's no guarantee that the data wells won't ever run dry. As infinite as the internet seems, few things are actually endless.”
I've traced the outline of the future economy and the kind of status quo that economy is most likely to support. Assuming the “AI ecosystem,” which really consists of the language models themselves and the raw data from the human dataome that sustains them, can survive the only two things that really threaten it at this point (solar EMPs and model collapse), it is no longer a question of whether we will reach the threshold of a technological singularity. The only questions that remain are when it will happen, what shape it will take… and whether it will harm or benefit the average human being.
Continued rumblings from within OpenAI and other AI firms don't bode well for the “decel” movement. ‘Decels’ is the informal term given to the faction of computer scientists who are trying to use their collective influence to slow down AI development to a more deliberate pace. Sam Altman was reinstated soon after being ousted, and he believes in AI's ability to bring forth a “cosmic utopia” more strongly than almost anyone else I know of.
https://decrypt.co/207190/openai-ai-dangerous-breakthrough-led-to-sam-altman-ceo-removal
Part 3: Spirits of Cyberspace
Since the days of Immanuel Kant and the “new theodicy” presented in his work, there have been many, many theories of mind and models of the psyche. American transcendentalism (which is almost redundant to say, since the transcendentalist movement originated in, and to this day mostly comes from, the US), for example, is a spiritual philosophy based around a theory of mind. It is a mental model in which the main objective, purpose, or goal in life is said to be transcending the personal ego in order to become “one with the Force,” or as Ralph Waldo Emerson described it, the “Oversoul”. While Emerson clearly meant to describe God or an aspect thereof with this term, the fact that it also describes the collective human experience - how concepts conveyed through literature allow people to connect and relate to the thoughts of others who lived in different regions, generations, or even previous eras of history - made his understanding of the Creator a very “unorthodox” one in Nineteenth Century America. He was often accused of being a heretic and consorting with devils, but his philosophy transcended his lifetime and his personal influence as a philosopher. It's a tradition that's still being added to. One of the most recent and definitive voices added to this philosophical tradition was that of Ken Wilber, in the late Twentieth and early Twenty-first Centuries.
“Ken Wilber (2000) proposed a model of human consciousness called Integral Theory, which integrates various levels, lines, states, and types of development, based on a synthesis of Eastern and Western philosophical and psychological traditions.
According to Wilber, the mind is not a fixed or static entity, but a dynamic and evolving process that mediates between the individual and the environment, and between the lower and higher aspects of consciousness.
He suggested that the Ego/mind can be seen as a spectrum of development that ranges from pre-personal to personal to transpersonal stages, each with its own characteristics, challenges, and potentials. The mind can also be influenced by different domains of development, such as Cognitive, moral, emotional, interpersonal, and spiritual.”
That article from self-transcendence research goes on to describe the specifics of the Integral Model and gives a brief summary of other prominent theorists who've contributed their own accounts of Self-transcendence.
“According to Wilber (1981), Self-transcendence is the goal of human development and the highest expression of human potential. He suggested that Self-transcendence can be facilitated by a supportive environment that encourages growth, openness, Authenticity and Diversity.
Other recent theorists have also explored the concept of Self-transcendence from different perspectives. For example:
Frankl (1963) defined Self-transcendence as the ability to go beyond oneself and find Meaning in life. He claimed that Self-transcendence is a fundamental human motivation and a source of resilience in the face of suffering. He advocated for a form of psychotherapy called Logotherapy, which helps people discover their unique Purpose and values in life.
Maslow (1971) described Self-transcendence as the extension of oneself beyond the Self-actualization level in his hierarchy of needs. He claimed that Self-transcendence is a natural tendency of human nature and a manifestation of Peak experiences, which are moments of intense joy, wonder and ecstasy. He emphasized the importance of cultivating creativity, Spirituality and Altruism for achieving Self-transcendence.
Reed (2008) conceptualized Self-transcendence as a multidimensional construct that involves expanding one’s Boundaries of self, connecting with others and nature, and expressing one’s values and beliefs. She developed a theory and a measure of Self-transcendence for nursing practice, which aims to enhance wellbeing, coping and Quality of life for patients and caregivers.
Cloninger (2004) proposed a biopsychosocial model of personality that includes Self-transcendence as one of the four temperaments. He defined Self-transcendence as the tendency to identify with everything that exists and to experience a sense of Oneness, Harmony and spiritual Awareness. He argued that Self-transcendence is influenced by genetic factors, brain functions and environmental factors.
This seeming difference in views as to what Self-transcendence actually is may not be as significant as it first appears. If one conceives Self-transcendence as a spectrum, much as Wilbur proposed. Then it’s possible to see that these descriptions reflect the commentators’ focus on a specific part of that spectrum, and that within that view all these views may well be describing the same thing.”
There have been many philosophers, many psychologists (psychology being a field that began as a branch of philosophy before becoming a behavioral science), and many mental models. As our next source demonstrates, mental models are useful tools for understanding reality, but they are only as good as the thought processes of the mind(s) that created them.
“Alfred Korzybski originally put forward the idea that “the map is not the territory” to outline the fact that people often confuse their beliefs of the world with reality, even when their beliefs are based on little to no evidence. This can be very dangerous, as public opinion influences decisions that are made by organizations and governments. Mental models are similar to opinions: everybody has them, but they are not all equally valid. Mental models and opinions are only as correct as the supporting evidence and reasoning that forms them.
The best mental models are the models that stand the test of time, such as Occam’s razor. Longstanding mental models typically endure because they are accurate enough to be useful to many people, while short lived mental models, including some models that never extend beyond one person, wither away because they are not useful and are potentially based on biases. However, even long lasting models need to be updated or replaced from time-to-time.
Consider the laws of physics as formulated by Isaac Newton. His framework for gravity has enabled astronomers to accurately predict the motion of planets and stars for centuries. His framework is still widely used today and is sufficiently accurate for most applications. However, in some situations, it falls short. During the 20th century, astronomers discovered that there were circumstances in which Newtonian Mechanics did not align with what astronomers were actually seeing. Albert Einstein’s “General Theory of Relativity” corrected these errors, but its mathematics involving gravity are much more complex.
These two mental models both seek to describe the same thing: gravity, but they have their own benefits and drawbacks. Newton’s equations for gravity are much simpler than Einstein’s, but in some situations, are not accurate enough and paint an incomplete picture. Einstein’s equations are significantly more complex and are not suitable or needed for most everyday tasks, but are crucial in situations where Newton’s equations fail. There are many such scenarios where multiple models describe the same thing in different ways and are each suitable under different circumstances.”
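A concrete example of “each suitable under different circumstances”: Newton's two-body solution predicts that Mercury should trace a fixed ellipse, while general relativity adds a small correction that slowly rotates the orbit's perihelion by roughly 43 arcseconds per century - tiny, measurable, and invisible to the Newtonian model. The short calculation below (my own illustration, not from the quoted article) uses the standard first-order relativistic formula:

```python
# Perihelion advance of Mercury: the correction GR adds on top of Newton.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # mass of the Sun, kg
C = 2.998e8          # speed of light, m/s

a = 5.791e10         # Mercury's semi-major axis, m
e = 0.2056           # Mercury's orbital eccentricity
period_days = 87.97  # Mercury's orbital period

# Newtonian two-body prediction: a closed ellipse, zero perihelion drift.
newtonian_advance = 0.0

# First-order GR correction per orbit, in radians:
# delta_phi = 6 * pi * G * M / (c^2 * a * (1 - e^2))
gr_per_orbit = 6 * math.pi * G * M_SUN / (C**2 * a * (1 - e**2))

orbits_per_century = 365.25 * 100 / period_days
arcsec_per_radian = 180 / math.pi * 3600
gr_per_century = gr_per_orbit * orbits_per_century * arcsec_per_radian

print(f"Newtonian prediction: {newtonian_advance:.0f} arcsec/century")
print(f"GR correction:        {gr_per_century:.1f} arcsec/century")  # ~43
```

For plotting a planet's position on a calendar, Newton's model is all anyone needs; for explaining that stubborn 43-arcsecond residual, only Einstein's will do.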
This brings us to a key difference between how humans and AI process knowledge. Unless outfitted with cameras and sensors, an AI has no way of observing the physical environment around its housing. Its knowledge of the world comes exclusively through data, especially during training. While neuromorphic programming may be further along than I had surmised, teaching AI to interact with its environment through a sensory apparatus similar to human sensory organs is much further away.
"What today robots and AI don't know how to do is the same as what they can't do. The development of science and technology is certainly driven by the desire for knowledge but also by the objective of producing more. What is and will be available is what has potential for use. For example, a robot today is a rigid structure, with limited mobility and cannot adopt all the positions that are possible for natural beings. But this is not a theoretical limit: it would be enough to add more degrees of mobility and actuators, but who would care? Another example, linked to AI, is the ability to manage novelty. Today the deep learning systems are very effective at recognizing objects and people from images, learning from a large quantity of appropriately labelled images. What they assign to the new image is therefore a label from those known, or a non-recognition. However, even this is not an insurmountable theoretical limit. In many cases it would be sufficient to add a greater number of labels. More generally, similarity rules should be added to assign previously unseen objects to the most similar categories. The biggest difference between what humans learn and what AI systems learn is the role of sensory experience in producing knowledge. For humans, all knowledge is mediated by sensory stimuli, and is constantly evolving. Machines do not interact and do not learn from sensory experience, and therefore knowledge is introduced to them in the form of data and rules. So far, experiments to make robots acquire knowledge through continuous interaction with the environment are only just beginning."
-Giuseppina Gini, Associate Professor of Robotics at the Polytechnic of Milan
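Gini's remark about adding “similarity rules” so that previously unseen objects get assigned to the most similar known category looks, in practice, something like the nearest-centroid sketch below. This is my own toy illustration with made-up feature vectors, not her example:

```python
# Assigning an unseen item to the most similar known category via cosine similarity.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical feature centroids learned from labelled training images.
category_centroids = {
    "cat":  np.array([0.9, 0.10, 0.30]),
    "dog":  np.array([0.7, 0.60, 0.20]),
    "bird": np.array([0.1, 0.20, 0.90]),
}

# Features of an object the system was never trained on (say, a fox).
unseen = np.array([0.8, 0.45, 0.25])

scores = {label: cosine_similarity(unseen, centroid)
          for label, centroid in category_centroids.items()}
best = max(scores, key=scores.get)

print(scores)
print(f"Assigned to the most similar known category: {best}")
# The system never "knows" it saw a fox; it only reports whichever label is closest.
```

That is the whole trick - and also the limitation Gini is pointing at: nothing in that loop resembles a child's single glance at a living animal.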
That segues neatly into my next point, which Big Think's Anil Seth makes clear here:
“In Prague, in the late 16th century, Rabbi Judah Loew ben Bezalel took clay from the banks of the Vltava River and from this clay shaped a humanlike figure — a golem. This golem — which was called Josef, or Yoselle — was created to defend the rabbi’s people from anti‐Semitic pogroms, and apparently did so very effectively. Once activated by magical incantation, golems like Josef could move, were aware, and would obey. But with Josef, something went terribly wrong and its behavior changed from lumpen obedience into violent monstering. Eventually the rabbi managed to revoke his spell, upon which his golem fell into pieces in the synagogue grounds. Some say its remains lie hidden in Prague to this day, perhaps in a graveyard, perhaps in an attic, perhaps waiting, patiently, to be reactivated.
Rabbi Loew’s golem reminds us of the hubris we invite when attempting to fashion intelligent, sentient creatures — creatures in the image of ourselves, or from the mind of God. It rarely goes well. From the creature in Mary Shelley’s Frankenstein to Ava in Alex Garland’s Ex Machina, by way of Karel Čapek’s eponymous robots, James Cameron’s Terminator, Ridley Scott’s replicants in Blade Runner, and Stanley Kubrick’s HAL, these creations almost always turn on their creators, leaving in their wake trails of destruction, melancholy, and philosophical confusion.
Over the last decade or so, the rapid rise of AI has lent a new urgency to questions about machine consciousness. AI is now all around us, built into our phones, our fridges, and our cars, powered in many cases by neural network algorithms inspired by the architecture of the brain. We rightly worry about the impact of this new technology. Will it take away our jobs? Will it dismantle the fabric of our societies? Ultimately, will it destroy us all — whether through its own nascent self‐interest or because of a lack of programming foresight which leads to the Earth’s entire resources being transformed into a vast mound of paper clips? Running beneath many of these worries, especially the more existential and apocalyptic, is the assumption that AI will — at some point in its accelerating development — become conscious. This is the myth of the golem made silicon.
What would it take for a machine to be conscious? What would the implications be? And how, indeed, could we even distinguish a conscious machine from its zombie equivalent?”
Rabbi Loew would no doubt be shocked by all the things that have come to pass since his time in terms of technology, but if the tale of the Prague Golem contains any truth at all, he might not be so surprised by the recent advances in artificial intelligence. He certainly wouldn't be so shocked by robots. This is not to say that golems and robots are the same, even overlooking the fact that one is real and the other is not. Not only do they operate on different principles and through different physical media (for now, though “smart substances” not so different from the clay from which golems are allegedly made are also in the pipeline), but computer science operates on a much deeper understanding of the nature of information than Qabbalah does. It's a more thorough “mental model” of consciousness.
Even so, the distinction between a “real” and a “zombie” AGI will be difficult to establish, if it's possible at all. That makes the fact that people will inevitably become more and more reliant on AI, as time goes on and new generations are born under its influence, all the more problematic in the long run. My Facebook friend Matthew Sabatine explains it eloquently in his blog, the Common Caveat.
“What really matters? What authenticates knowledge? Our brilliant technologies that purvey vast amounts of information are also struggling to establish those spiritually nourishing answers. We have information but not wisdom, and we lack the insight to realize that our brilliant information is outweighing the wisdom.
Independence may be a virtue, but its excess has made us arrogant and attached to an autodidactic, self-absorbed, self-limiting “religion of me” in which we continually choose our personal biases and echo-chambers that are intensified by social media. This “wisdom famine” is worsening among a growing number of people who are unaffiliated with religion and deride institutionalized sources, even though they do not largely identify as atheist.”
Matthew makes a profound point here, one that's shared, somewhat, by the Harvard Business Review.
“If you ever took a marketing course, you may remember the famous case from the 1950s about General Mills’ launch of Betty Crocker cake mixes, which called for simply adding water, mixing, and baking. Despite the product’s excellent performance, sales were initially disappointing. That was puzzling until managers figured out the problem: The mix made baking too easy, and buyers felt they were somehow cheating when they used it. On the basis of that insight, the company removed egg powder from the ingredients and asked customers to crack an egg and beat it into the mix. That small change made those bakers feel better about themselves and so boosted sales. Today, 70 years later, most cake mixes still require users to add an egg.
We can take a lesson from that story today. As companies increasingly embrace automated products and services, they need to understand how those things make their customers feel about themselves. To date, however, managers and academics have usually focused on something quite different: understanding what customers think about those things. Researchers have been studying, for example, whether people prefer artificial intelligence over humans (they don’t), how moral or fair AI is perceived to be (not very), and the tasks for which people are likely to resist the adoption of automation (those that are less quantifiable and more open to interpretation).
All that is important to consider. But now that people are starting to interact frequently and meaningfully with AI and automated technologies, both at and outside work, it’s time to focus on the emotions those technologies evoke. That subject is psychological terra incognita, and exploring it will be critical for businesses, because it affects a wide range of success factors, including sales, customer loyalty, word-of-mouth referrals, employee satisfaction, and work performance.
We have been studying people’s reactions to autonomous technology and the psychological barriers to adopting it for more than seven years. In this article, drawing on recent research from our lab and reviewing real-life examples, we look at the psychological effects we’ve observed in three areas that have important ramifications for managerial decision-making: (1) services and business-process design, (2) product design, and (3) communication. After surveying the research and examples, we offer some practical guidance for how best to use AI-driven and automated technologies to serve customers, support employees, and advance the interests of organizations.”
As I said, there's *some* overlap, even if the two “mental models” at play are aimed at very different purposes. AI is an inevitability at this point, but how it will ultimately affect us and our world is a question that should be explored.
Kevin Brown of the Christian Scholars has a more realistic take on the possibility of an “AI takeover” than the typical Skynet scenario.
“Fear of AI’s potential to eviscerate humans is no longer originating from science fiction. Comparing AI’s destructive potential to pandemics and nuclear weapons, The New York Times recently suggested human existence, not simply livelihood, is at stake. Before his death, Stephen Hawking warned that AI’s development “could spell the end of the human race.” More recently, various scientists and AI experts signed an open letter requesting a pause on AI developments such as ChatGPT or other Large Language Models. Among other warnings, the letter raised the question of developing “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us.”
As one article read, “It is unusual to see industry leaders talk about the potential lethality of their own product.” When renowned scientists echo the imaginative doom that has marked the science fiction genre for decades, you know the story of malicious machinery is more than just a story. Fear is warranted. The threat is real.
But what if the nature of the threat is not conscious, malevolent AI? What if the threat is more subtle? Artificial Intelligence is not plotting to usurp our agency. More realistically, we will unwittingly give it away.
That is the threat.
And the severity of unconsciously transferring agency to artificial intelligence is heightened by a Christian understanding of personhood. The irreducible, non-transferable essence of humans is that we are created in the image of God (Imago Dei). Part of realizing our humanity and enacting redemptive order is bound up in bearing out God-reflecting attributes such as creativity, productivity, moral reflection, and relational capacities. If this is constitutive of our humanity and integral to human flourishing, then consigning essential elements of our personhood to AI risks a disfigurement of God’s image and our dehumanization.”
An easy example of ‘dehumanization’ *might* be AI relationships. That's almost certainly what Matt Sabatine was talking about, when he wasn't referring to “spiritual but not religious” people like myself.
“In an era where technology and digital innovations shape our lives, a clear trend is emerging: More and more men are actually preferring a relationship with an AI woman over a real-life partner. Creating a seemingly perfect partner with just a click of a mouse might sound too good to be true, right? But what are the real downsides? We wanted to delve deeper into this burgeoning topic and spoke with the founder of the startup c***y.ai, a frontrunner in this field and currently the talk of the web. The young founder provided us with some fascinating insights into this subject.”
The rest of that Mindiscuss article consists of an interview with this young founder, referred to only as “Anna.” Another example of the “dehumanization” Kevin Brown was referring to might be the use of AI speech models to compose or even deliver sermons in churches. This has been a trend within the German Catholic Church, and it has been a source of tension between the German bishops and the Holy See in recent years (along with other issues, like LGBTQ acceptance). Apparently, Pope Francis has undergone a shift and is taking a firmer stance on these matters than he has been known for since becoming Pope.
Another example might be the gradual loss of the human element in law enforcement and security. Security bots and (as of the last few weeks, since the opening of the war in Gaza) semi-autonomous weapons systems are a reality.
The “civilized” West in particular has sprouted a veritable garden of conspiracy theories, especially over the last decade or so. The Left places most of the Right under the broad umbrella of fascism, and some factions claim there'd be a purge if Trump won back the presidency. After all, news about his clean bill of health is “devastating” to some.
Meanwhile, many of us on the Right believe the Left represents sweeping utopian social engineering, and that our rights will continue to erode under Biden. In fact, both of the major American political parties are fractured and largely ineffective; the Republican-held House of Representatives has focused more on enforcing current laws than on passing new ones. At the same time, people from all over the political spectrum are awaiting the release of the names in the late Jeffrey Epstein's dossier. Everyone wants to know who the elite child molesters are, and many have their own opinions about whose names will appear on that list. We'll find out after the new year; 2024 will start off in a worse state than 2023.
https://nymag.com/intelligencer/article/who-are-the-newly-revealed-jeffrey-epstein-associates.html
The GOP has lost all cohesion, and the conflict between the Trump and DeSantis camps is perhaps the most prominent of those in-house battles going on at the moment.
The Gaza War has sent a schism through the Democrat party.
Even that schism is but a microcosm in the larger ideological conflict between the global Right and Left. In many (not all) cases, this conflict stems from the underlying thesis and antithesis of individualism and collectivism.
“Collectivists considered their view about both the medical and moral aspects of COVID-19 vaccinations as generally applicable. This self-centered generalization ignored the concerns of others who, for example, were worried about the uncertainty of vaccine-induced injury in comparison to their individual ability to both minimize infection risk and manage infection consequences. The “rationale” to ignore concerns and views of individuals was based on a true belief—or true religion in Twain’s words—that the proper course of action was to “follow the science” and to “trust the expert advice.” During a CNN town hall in July 2021, U.S. President Joe Biden stated that "you’re not going to get COVID if you have these vaccinations." For many well-meaning collectivists, the President based this advice on the science. As we know, vaccinated people did get COVID but collectivists can still believe in the true science because, to paraphrase Twain, there can be several of them.
Collectivists make the mental step from a belief to a true belief because of their desire for a perfect clarity and certainty. This idea is an overgeneralization, yet it is seducing people to fight against and even to kill those whose different views are disturbing their imagined clarity and certainty.”
American manufacturing has declined to the greatest extent in decades.
Meanwhile, the National Debt continues to increase.
Tension mounts amid continuous power struggles in Texas: women's rights versus fetal rights. So far, Texas Republicans have drawn a hard line.
Back on the East Coast, the City of Brotherly Love is in a state of decay, practically ruin.
The AI industry is very lucrative right now, to put it mildly. It's fairly safe to say the Federal Reserve and various international banks have a lot of assets sunk into its rollout, isn't it?
http://www.bigskywords.com/montana-blog/the-rothschilds-freemasons-and-illuminati
A recent ruling from the Eighth Circuit Court of Appeals has allegedly “gutted” the Voting Rights Act of 1965.
Against this backdrop of increasing political polarization, terms like “epidemic” and “tsunami” have been used to describe the mental health landscape of the United States. It has reached the point where burnout is common among mental health professionals themselves, and they find themselves needing support.
https://teletherapistnetwork.com/
To put it mildly, mental health in America is a work in progress. The psychiatric system has helped many people, but it has failed many too.
Medical authorities are more prone to meet the exposure of research fraud, and of failures to help mental health patients achieve remission and peace of mind, with silence than to publicly acknowledge such indiscretions.
“When the STAR*D study was launched more than two decades ago, the NIMH investigators promised that the results would be rapidly disseminated and used to guide clinical care. This was the “largest and longest study ever done to evaluate depression treatment,” the NIMH noted, and most important, it would be conducted in “real-world” patients. Various studies had found that 60% to 90% of real-world patients couldn’t participate in industry trials of antidepressants because of exclusionary criteria.
The STAR*D investigators wrote: “Given the dearth of controlled data [in real-world patient groups], results should have substantial public health and scientific significance, since they are obtained in representative participant groups/settings, using clinical management tools that can easily be applied in daily practice.”
In 2006, they published three accounts of STAR*D results, and the NIMH, in its November press release, trumpeted the good news. “Over the course of all four levels, almost 70 percent of those who didn’t withdraw from the study became symptom-free,” the NIMH informed the public.
This became the finding that the media highlighted. The largest and longest study of antidepressants in real-world patients had found that the drugs worked. In the STAR*D study, The New Yorker reported in 2010, there was a “sixty-seven-percent effectiveness rate for antidepressant medication, far better than the rate achieved by a placebo.”
That happened to be the same year that psychologist Ed Pigott and colleagues published their deconstruction of the STAR*D trial. Pigott had filed a Freedom of Information Act request to obtain the STAR*D protocol and other key documents, and once he and his collaborators had the protocol, they were able to identify the various ways the NIMH investigators had deviated from the protocol to inflate the remission rate. They published patient data that showed if the protocol had been followed, the cumulative remission would have been 38%. The STAR*D investigators had also failed to report the stay-well rate at the end of one year, but Pigott and colleagues found that outcome hidden in a confusing graphic that the STAR*D investigators had published. Only 3% of the 4041 patients who entered the trial had remitted and then stayed well and in the trial to its end.
The protocol violations and publication of a fabricated “principal outcome”—the 67% cumulative remission rate—are evidence of scientific misconduct that rises to the level of fraud. Yet, as Pigott and colleagues have published their papers deconstructing the study, the NIMH investigators have never uttered a peep in protest. They have remained silent, and this was the case when Pigott and colleagues, in August of this year, published their latest paper in BMJ Open. In it, they analyzed patient-level data from the trial and detailed, once again, the protocol violations used to inflate the results. As BMJ Open wrote in the Rapid Responses section of the online article, “we invited the authors of the STAR*D study to provide a response to this article, but they declined.”
In fact, the one time a STAR*D investigator was prompted to respond, he confirmed that the 3% stay-well rate that Pigott and colleagues had published was accurate. While major newspapers have steadfastly ignored Pigott’s findings, after Pigott and colleagues published their 2010 article, Medscape Medical News turned to STAR*D investigator Maurizio Fava for a comment. Could this 3% figure be right? “I think their analysis is reasonable and not incompatible with what we had reported,” Fava said.
That was 13 years ago. The protocol violations, which are understood to be a form of scientific misconduct, had been revealed. The inflation of remission rates and the hiding of the astoundingly low stay-well rate had been revealed. In 2011, Mad in America published two blogs by Ed Pigott detailing the scientific misconduct and put documents online that provided proof of that misconduct. In 2015, Lisa Cosgrove and I—relying on Pigott’s published work and the documents he had made available—published a detailed account of the scientific misconduct in our book Psychiatry Under the Influence. The fraud was out there for all to see.
Pigott and colleagues subsequently obtained patient-level data through the “Restoring Invisible and Abandoned Trials” initiative (RIAT), and their analysis has confirmed the accuracy of their earlier sleuthing, when they used the protocol to deconstruct the published data. Thus, the documentation of the scientific misconduct by Pigott and colleagues has gone through two stages, the first enabled by their examination of the protocol and other trial-planning documents, and the second by their analysis of patient-level data.
Yet, there has been no public acknowledgement by the American Psychiatric Association (APA) of this scientific misconduct. There has been no call by the APA—or academic psychiatrists in the United States—to retract the studies that reported the inflated remission rates. There has been no censure of the STAR*D investigators for their scientific misconduct. Instead, they have, for the most part, retained their status as leaders in the field.
Thus, given the documented record of scientific misconduct, in the largest and most important trial of antidepressants ever conducted, there is only one conclusion to draw: In American psychiatry, scientific misconduct is an accepted practice.
This presents a challenge to the American citizenry. If psychiatry will not police its own research, then it is up to the public to make the fraud known, and to demand that the paper published in the American Journal of Psychiatry, which told of a 67% cumulative remission rate, be withdrawn. As STAR*D was designed to guide clinical care, it is of great public health importance that this be done.”
Modesty standards have changed a lot in recent years, and practically any advice on the subject is met with hostility by many women, especially when it comes from massive mansplainers like myself. Yet even some feminist authors have made the point that immodesty triggers the “objectifying” part of a man's brain, as Intellectual Takeout's Aletheia Hitz explains.
“Modesty isn’t something society likes to talk about. Suggest that it might be proper and you’ll probably get an angry glance, and if you’re in the right situation, a snide comment about the patriarchy. “If you don’t like it, don’t look,” many people declare, and everybody else is expected to applaud their astounding show of eloquence.
Researchers and writers Lexie and Lindsay Kite, while admirably polite in presenting their thoughts around dress codes (and, by extension, modesty), accurately reflect the mindset of contemporary society: “[Dress codes] inadvertently sexualize young women as a collection of inappropriate body parts, positioning them as threats to be mitigated at any cost.”
Another writer, Katlyn White, expresses a similar sentiment: “Dress codes teach women, from a young age, that their bodies are to be hidden. … By banning cleavage and thighs, dress codes teach girls that their bodies are objects.” By causing young women to think about their clothes, the argument goes, we imply that those young women are merely bodies.
Certainly, some girls may become inordinately self-conscious by intentionally dressing modestly. It’s always strange to do something different from what we’re used to, and with the increasingly permissive attitudes around clothing and personal presentation, modesty (for most people) is definitely different. But—and this is key—is self-consciousness the primary concern?
In the late 2000s, Susan Fiske conducted a study on the effect of immodest clothing on the male mind. In the study, male subjects viewed images of scantily clad men, scantily clad women, and fully clothed men and women. Reporting on the study, Christie Nicholson noted that—not surprisingly—the subjects were best able to remember the women in bikinis. Not only that, the subjects’ memory “‘correlated with activation in part of the brain that is a pre-motor, having intentions to act on something, so it was as if they immediately thought about how they might act on these bodies.’” In other words, the immodesty present in the women turned on parts of the men’s brain that corresponded with objects. Modesty doesn’t objectify women; immodesty does.”
https://intellectualtakeout.org/2023/03/objectify-women-importance-modesty/
Both conspiracy theories and propaganda are rampant in the Information Age. Elon Musk advertises himself as a “free speech absolutist” and promotes his products as technology designed by someone who values freedom of expression. He has used his notoriety to resurrect a certain “debunked” conspiracy theory that Democrats would much rather remain dead. After all, it didn't originate in the pizza parlor; it originated in the emails of Clinton Foundation staffers. But as this article from Vice Magazine's Matthew Gault demonstrates, the gaslighting is still strong.
At the same time, his Grok AI is nowhere near as “anti-woke” as advertised.
Meanwhile, gender-creative parenting is a growing trend, especially abroad and on the West Coast of the US. I've seen people from that part of the country say things to the effect of, “You wouldn't talk that way if you lived in the West” to conservatives. This is an indicator of how starkly different the demographics of certain states have become.
https://xtramagazine.com/love-sex/relationships/gender-open-parenting-245816
Just so any hypothetical future readers don't assume I'm conflating correlation with causation here, it should be noted that teachers molesting their students is a problem that transcends party lines, sexual orientation, and even relationship status. This married Louisiana woman spent an entire summer messing with one of her students, often in her marriage bed or in her husband's new vehicle. It's an extreme case, but not as uncommon as it should be.
There are many manifestations of the mental health crisis. One of the most definitive is captured by a term coined in 2002 to summarize three of the darkest personality traits: the Dark Triad.
“We all have stories of meeting people who appeared wonderful at first but turned out to be just awful. Perhaps it was a charming suitor, or a charismatic colleague, or a fascinating new friend. They attracted you on initial impression, but before long, you started to notice behaviors that gave you pause. Maybe it was a little shading of the truth here and there, or a bit too much vanity and selfishness. Perhaps they constantly played the victim, or took credit for other people’s work.
Or maybe your disillusionment with the person was not gradual, but through a dramatic—and dramatically unpleasant—episode. All it may take is a minor disagreement, and suddenly, you get screamed at, threatened with retaliation, or reported to HR. This kind of encounter leaves you, understandably, baffled, hurt, and confused.
Very likely, this person was a “Dark Triad” personality. The term was coined by the psychologists Delroy Paulhus and Kevin Williams in 2002 for people with three salient personality characteristics: narcissism, Machiavellianism, and a measurable level of psychopathy. These people confuse and hurt you, because they act in a way that doesn’t seem to make sense. As one scholar aptly described the ones whose behavior shades more obviously into psychopathy, these are “social predators who charm, manipulate, and ruthlessly plow their way through life, leaving a broad trail of broken hearts, shattered expectations, and empty wallets.”
-Arthur C. Brooks
From decades before that (1979) comes another definitive piece of psychological literature, Christopher Lasch's The Culture of Narcissism. Therein, he defines nine specific traits of the “New” (American) Narcissist:
“[He] is haunted not by guilt but by anxiety. He seeks not to inflict his own certainties on others but to find a meaning in life. Liberated from superstitions of the past, he doubts even the reality of his own existence. Superficially relaxed and tolerant, he finds little use for dogmas of racial and ethnic purity but at the same time forfeits the security of group loyalties and regards everyone as a rival for the favors conferred by a paternalistic state. His sexual attitudes are permissive rather than puritanical, even though his emancipation from ancient taboos brings him no sexual peace. Fiercely competitive in his demand for approval and acclaim, he distrusts competition because he associates it unconsciously with an unbridled urge to destroy. He extols cooperation and teamwork while harboring deeply antisocial impulses. He praises respect for rules and regulations in the secret belief that they do not apply to himself. Acquisitive in the sense that his cravings have no limits, he does not accumulate goods and provisions against the future… but demands immediate gratification and lives in a state of restless, perpetually unsatisfied desire. [He] has no interest in the future because, in part, he has so little interest in the past.”
Ironically, as removed in time as they are from us, these characteristics describe the fruits of liberal as well as conservative culture. Lasch was describing a different time in American history, one that was much more lawless in certain ways. Without the pervasive surveillance technology of today (the promise of curbing such abuses being one of the many reasons its creators advocated for it despite the loss of privacy), many forms of abuse went unpunished by the American justice system of Lasch's day. Still, it all sounds very familiar when read by someone in the current era.
I'm going to take this reverse timeline to its logical conclusion, even further into the past. This source concerns a thinker born well over a century ago.
“One of the most noteworthy critics of the modern age was the German academic (with a not-so-German name) Romano Guardini (1885-1968). Among others, his thoughts on technology and the environment have influenced Pope Francis, as was apparent in his recent encyclical Laudato Si.
In his essay The End of the Modern World, Guardini claims that the rise of technology and modern man’s changed relationship with nature – which has come to be seen as something to be dominated – has given birth to a new sociological category: the “Mass Man.” ”
Lately, it seems like this general loss of connection and meaning has resulted in a particular decline in the productivity of American workers.
Intellectual Takeout's Daniel Lattier goes on to outline the six main characteristics Guardini used to define this “Mass Man.” They are as follows: lacking unity with the past, a life ordered like a machine, no desire for originality, content to conform, doesn't value freedom, and thinks of himself as an object. Guardini was among the first to use “integration” as a euphemism for the Hegelian concept of synthesis, as his proposed solution was an integration of past values and modern developments. This aspect of his work is probably the part Pope Francis finds so applicable to contemporary issues, and brings us easily back to the present.
In 2017, the Atlantic published an article about the effect of smartphones on teenagers. That article, unfortunately, lies behind a paywall, so I won't post it here. The following quote, from a rebuttal by JSTOR Daily's Alexandra Samuel, is quite appropriate to what I'm trying to get across in this essay, and it summarizes the original writer's work as well.
“Quickly, now: Go rip a smartphone out of the hands of the nearest teen. If you have a teen child of your own, you can start there—or if you have kids under 13, you can take away whatever device they’re presently using. Feel free to just tear your TV off of the wall, if that’s all you’ve got to turn off. And if you don’t have kids, snatch a phone from any teenager who happens to walk by.
If that level of panic feels overblown, then perhaps you missed the latest story to spread a message of tech alarm to the world’s online parents. Writing in The Atlantic, Jean Twenge warns that “the twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever. There is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.”
Beginning with its provocative title, “Have Smartphones Destroyed a Generation?”, the article sets us up to feel hopeless about the way mobile and social media has turned Kids These Days into lonely, depressed screen addicts who are failing to advance along the established path to adulthood.
It’s not that Twenge’s got her story wrong; on the contrary, it’s precisely because she’s onto something that we need to be so careful about drawing the right conclusions from the evidence she cites. Even more crucial—and missing not just from Twenge’s work, but so many of these alarmist pieces—is the so what: what, exactly, are parents supposed to do about the problem?
Don’t worry, I’ll get there.”
https://daily.jstor.org/yes-smartphones-are-destroying-a-generation-but-not-of-kids/
Essentially, Alexandra argues that the problem originates more from neglectful parents who are addicted to social media, something I've needed to check myself on in the past. Her take comports with the observations of growing numbers of people who have chosen not to have children “in a world like this.”
As of 2021, America was still deeply divided on the topic of race relations, and going by the words of African Americans I know personally, that hasn't changed. Without wading too deeply into that thorny thicket, I'll say that certain geographic regions of the US are a lot worse about this than others. Thomas Sowell and others have given eloquent explanations for why that is. As a result, activist groups like Antifa and BLM are also more militant in some states than in others. This feeds the ongoing mental health “tsunami” through racial anxiety, and the ideology associated with these movements compounds that anxiety with concepts like internalized racism.
Few things have created more of a rift in Western society, or had more cascading effects on the state of public mental health, than the word ‘Woke’ and its manifold connotations. Al Jazeera's Johnny Luk gives an outside (non-Western) perspective on this below:
“In the West, the term “woke” has become a lightning rod on both the left and the right – a symbol of a modern culture war.
But its origins are far from modern. It first emerged in the US in the 1940s from the word “awake” and was used to describe someone who is well-informed on issues of social injustice – particularly racism. In its original use, it meant being alert to the specific discrimination and systemic harm suffered by African Americans. Thus, being “woke” implies one has “awakened” from a slumber, rather like the protagonist, Neo, after being unplugged from the Matrix in the movie of the same name. More recently, it has been adopted as a ubiquitous watchword for a wide variety of social movements, including LGBTQ issues, feminism, immigration, climate change and marginalised communities.
But this broad use of the term has caused it to become heavily weaponised by both the left and the right, turning what was once a welcoming creed into a toxic and divisive word, particularly in Western countries including the US, Canada, the UK and other European nations. This toxicity is in large part due to activists failing to develop the necessary coalitions to instil the change campaigners are advocating for.
This is a shame because the messages of inclusivity and diversity underpinning “wokeness” should not be so easily dismissed.
So why has the term “woke” become so divisive? The trouble starts when campaigns over-reach, alienating moderate supporters. It is easy to see how this has happened. Examples include toppling statues of wartime leader Winston Churchill – cherished as a hero by many in the West – or companies being “advised” to stop using the word “mother” by LGBTQ+ charity Stonewall, which operates a “Workplace Equality Index” in the UK. To large swaths of the general public, this all screams of political correctness gone mad.
Wokeness also implies that those not in the club are asleep, deluded or wrong. This instant judgement forms a dividing line, forcing the other side to become defensive and further entrenching the debate. The moral superiority platform is hardly a way to bring sceptics on board, especially when wealthy and privileged campaigners who have co-opted wokeness do not even follow their own standards.”
https://www.aljazeera.com/opinions/2021/6/24/what-is-woke-culture-and-why-has-it-become-so-toxic
I would go so far as to say that's the best assessment of Wokeness I've ever seen. In the spirit of Matthew 7:16 (“by their fruits you shall know them”), Johnny Luk has listed some of the “fruits” of Wokeness. These are the immediately obvious ones. Others come under the auspices of those whose financial goals are integrated with it - those who are most invested in social engineering.
“It’s near impossible to have a one-world government, often called New World Order, despite what many alarmists may have you believe. However, if you can comprise numerous ostensibly separate systems, all meticulously designed to be fully “interoperable,” then you don’t need to have a one-world government to implement global policy.
You just need something that consists of a single unified healthcare system, identity database, and/or digital currencies – all of which are in the works.
How is this even possible?
It should come as no surprise that many of the same players involved in ID2020 are involved in 50-in-5:
“This country-led campaign is in collaboration with the Bill & Melinda Gates Foundation, Centre for Digital Public Infrastructure, Co-Develop*, the Digital Public Goods Alliance, and the United Nations Development Programme (UNDP) and is supported by GovStack, the Inter-American Development Bank, and UNICEF.”
*funded by the Rockefeller Foundation.
Could all of this be a coincidence? And can the influence of such organizations be so powerful to affect government at all levels?
Just one day before 50-in-5 officially launched, the Council presidency and European Parliament representatives reached a provisional agreement on a new framework for a European digital identity (eID).
Via the Council of the EU and the European Council:
“Under the new law, member states will offer citizens and businesses digital wallets that will be able to link their national digital identities with proof of other personal attributes (e.g., driving licence, diplomas, bank account). Citizens will be able to prove their identity and share electronic documents from their digital wallets with a click of a button on their mobile phone.” ”
The above quote and the following two are from a blog post by Carlisle Kane, a man whom many no doubt deem a conspiracy theorist.
“One could argue that it would be great to never have to carry a wallet again. I misplace mine all the time. However, that small inconvenience may be an insanely worthwhile price to pay if everything goes digital, handing complete control over to the government – or whoever is controlling the government(s).
One look at the COVID pandemic and you’ll see what I mean. Remember when Covid-19 vaccine passports were introduced, restricting travel rights – despite no proof that they prevented transmission?
The new digital wallets – combined with both your bank accounts and IDs – will push us even further in the direction of control and limitation.
Having all documents consolidated in one place makes them vulnerable to being seized with a single click. Don’t think this won’t happen? It already has.
Remember when Trudeau’s administration froze the bank accounts for those who protested the COVID vaccine, and subsequently even revoked insurance rights for drivers involved in the protest blockade in Ottawa?
Imagine what governments can do if everything was consolidated digitally and interoperably. Countries would lose autonomy and the powers-that-be would maintain full control. Just ask the World Health Organization as it attempts to take control of the world’s healthcare system via an international pandemic treaty.
Imagine a global lockdown forced onto every country. Imagine a globally implemented vaccine program. Imagine losing access to your entire identity or net worth because you don’t comply.
This will be the future if you continue to vote for governments in favour of such actions.”
While I honestly have yet to read any really deranged conspiracy theories about AI technology, I have no doubt they're forthcoming. AI is what makes all this more feasible now than ever before.
“You see, neither the U.S. nor China benefit from the coronavirus. Both economies have been hit extremely hard, and both are now resorting to extreme measures to protect not just their economies, but every one of their citizens.
When looking for answers, the simplest solution is to look from the top down: who really benefits from the coronavirus?
Not the U.S. Not China. Not any country, actually.
But the players involved in a global vaccination program and a cashless society certainly do, as do the banks who lend money to the most powerful nations in times of massive stimulus.
Think of the people involved in the creation of global vaccines and ID implants. They’ll have access and control over every individual’s vaccination records and finances.
And then think of the Federal Reserve, who will be lending trillions of dollars to the U.S.
When you look at it that way, is China or the U.S. really in control?
Given the massive stimulus measures now underway, I have no doubt that gold and other hard assets will skyrocket in value in the coming years. Furthermore, when money goes fully digital via biometric implants, hard assets such as precious metals will become the only real form of off-the-record wealth.”
Overall, it's becoming more and more clear that subjective belief systems affect mental health outcomes. A paranoid worldview brings misery, much like a perpetually offended one. The work of Luisa Fassi and the research project she has been heading up at the Alan Turing Institute has provided hard data showing that patients' beliefs about the efficacy of mental health treatments have a direct impact on that efficacy.
“The researchers examined data from four studies involving clinical patients (people with health conditions) and healthy adults. These studies used different types of neurostimulation treatments. They found that people’s subjective experience—what they thought they were receiving—could explain the differences in how well they responded to treatment better than the actual treatment itself.
Study 1: Focused on patients with treatment-resistant depression who underwent Repetitive Transcranial Magnetic Stimulation (rTMS). Researchers found that participants’ beliefs about whether they were receiving active or sham (placebo) treatment significantly influenced their depression outcomes. Patients who believed they were receiving active treatment showed more improvement than those who thought they were receiving sham treatment, regardless of the actual treatment they received.
Study 2: Examined older adults with late-life depression treated with high-dose deep rTMS. Similar to Study 1, the beliefs of participants about the treatment they received (active or sham) affected their depression scores more than the actual treatment.
Study 3: Investigated home-based electrical stimulation (tDCS) treatment in adults with ADHD. The results indicated that participants who believed they were receiving active treatment showed more significant improvements in attention symptoms than those who believed they were receiving sham treatment. In this study, the actual (objective) treatment also significantly affected the outcomes.
Study 4: Extended the research to healthy participants, testing the effects of different doses of tDCS on mind-wandering. The study found that participants’ beliefs about the treatment type and stimulation strength (subjective dosage) significantly influenced their mind-wandering levels. The actual treatment (objective treatment) did not significantly affect the results.
Across all studies, the researchers observed that patients’ subjective beliefs about their treatment played a crucial role in the effectiveness of neurostimulation therapies. This suggests that in addition to the actual treatment, the perception and beliefs of patients about their treatment can significantly influence health outcomes.
Researchers have typically focused on whether participants could correctly guess if they received the actual treatment or a placebo to determine the success of the blinding in trials. This new research challenges this approach, suggesting that it’s not just about whether participants can guess the treatment correctly, but their individual beliefs about the treatment they think they’re receiving can also affect the results.”
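To make the statistical logic of those findings a little more concrete, here is a minimal toy sketch (in Python) of the kind of comparison being described: whether outcomes line up better with what participants *believed* they received than with what they *actually* received. The numbers are simulated and purely illustrative; this is not the researchers' data or code.

```python
# A toy illustration (not the researchers' data or code) of the comparison
# described above: do outcomes track what participants *believed* they
# received better than what they *actually* received?
import numpy as np

rng = np.random.default_rng(0)
n = 200

actual_active = rng.integers(0, 2, n)      # 1 = real stimulation, 0 = sham
believed_active = rng.integers(0, 2, n)    # 1 = "I think I got the real thing"

# Hypothetical outcome: improvement driven mostly by belief (a placebo-like
# effect), with no effect of the actual treatment in this made-up scenario.
improvement = 2.0 * believed_active + rng.normal(0, 1, n)

def group_difference(outcome, group):
    """Mean outcome difference between group == 1 and group == 0."""
    return outcome[group == 1].mean() - outcome[group == 0].mean()

print(f"Difference by actual treatment:   {group_difference(improvement, actual_active):.2f}")
print(f"Difference by believed treatment: {group_difference(improvement, believed_active):.2f}")
# In this toy setup the 'believed' difference is large (about 2.0) while the
# 'actual' difference hovers near zero -- the pattern the studies above report.
```

A real trial analysis would of course be far more careful (covariates, blinding checks, mixed models), but the core contrast between subjective and objective treatment is the same.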
In a worldview where human well-being is the highest priority, any possible measure to improve that well-being is ethically permissible. This is the rationale and justification behind most ongoing social engineering efforts. It also, of course, creates all the more incentive to try to manipulate the personal beliefs of the population as much as possible.
Many of the theories on human nature that underlie these social engineering initiatives are flawed, and there is no stronger evidence of that than the social tension and polarization they cause. The application of such theories on a wide scale may well lead to a third world war. That will be the topic of the next section.
Part 4: Thucydides’ Trap
The following quote from a short Substack post published by Michael Sherlock, an Australian classical liberal atheist philosopher, puts the current international social landscape in a nutshell:
“One of my concerns about the Israel-Palestine conflict is its potential to radicalize formerly liberal Jews and Muslims, and socially and psychologically manipulate them into adopting the more noxious and dangerous strains of their religions.
We appear to be seeing, at present, the radicalization of almost all identity groups on the planet. The Left and Right in Western democracies are being radicalized - LGBTQIA+ groups are being radicalized - anti-LGBTQIA+ groups are being radicalized - climate change groups are being radicalized - anti-climate change groups are being radicalized - Human Rights groups are being radicalized - anti-Human Rights groups are being radicalized - every identity group appears to be building their barricades and drafting their fundamentalistic manifestos for war with each other.
Further, and on top of this splintered radicalization, we also appear to be witnessing the splintering of these tribes themselves, which is causing a kind of widespread chaos at both a societal level and an individual, psychological one, leading to a seemingly rapid increase in mental illness.
Simultaneously, we also appear to be witnessing increased government corruption and the rapidly increasing erosion of individual freedoms, particularly those fundamental and foundational ones of thought and speech.
History teaches us that this trend commonly precedes a major global conflict, from the ancient Peloponnesian War of the 5th century BCE to WW2.
I think the only antidote to this seemingly impending catastrophe is to consciously place our shared humanity in the foreground and our splintered group identities in the background.”
Sherlock's apprehensions are shared by many. There are theological debates underway among practicing Jews over whether the current Hamas war is the prophesied war of Gog and Magog. Rabbi Jack Abramowitz delves deep into some rabbinical esotericism before delivering the following statement:
“Let’s talk about the timing of the war. The Tur (OC 490) cites a tradition in the name of Rav Hai Gaon that the war of Gog and Magog will begin in Tishrei, which this war certainly did.
However… from context it seems that the war is meant to begin on the holiday of Succos. The Vilna Gaon appears to say that the war of Gog u’Magog will begin on Hoshana Rabbah, which is the last day of Succos. (According to the Zohar in parshas Tzav, this is also the day on which the nations of the world are to be judged, which dovetails with Yechezkel’s prophecy.) You might think it’s splitting hairs, but the current hostilities started on Shemini Atzeres, which is the day after the last day of Succos.
Let’s complicate matters further. The US bombed the Taliban in Afghanistan on October 7, 2001 (as the US reckons time), which coincides with Hoshana Rabbah (in Israel, which is seven hours ahead). Rav Yitzchok Kaduri, ztz”l, claimed that this event was the war of Gog u’Magog. According to him, the war in question would have already happened.
Let’s take matters one step further. The Malbim on Ezekiel 39:8 says that the timing of the war of Gog u’Magog was not revealed to any prophet; it’s a “sealed” matter. If we subscribe to the Malbim’s position, the last few paragraphs have been rendered moot.
There’s a Magog listed among the grandsons of Noach, from whom the nations of the world descend. Shall we assume that the Magog of Ezekiel is the same nation? I don’t know. The Abarbanel says that the modern descendants of the Magog of Genesis were scattered to Italy, France and Spain. Those nations aren’t combatants in this particular conflict, so there’s not much help there.
There’s also the matter of Moshiach ben Yosef. Moshiach ben Yosef is a sort of a “proto-messiah” who is foretold to be a great leader, but who will be killed in battle with Gog and Magog. This will lead to great mourning, as described in Zechariah chapter 12 (see Succah 52a). To my knowledge this hasn’t happened yet and I hope it never does.
You see, throughout history there have been events. Some of them are huge and some of them are terrible. A natural human reaction to such things is to try to fit them into Biblical prophecies. This is all but impossible to do while the events are unfolding. As my wife likes to say, you can’t see the picture when you’re inside the frame. But hindsight is 20/20. (Consider how Rabbi Akiva thought that Bar Kochba might be the messiah, but we can see clearly now that he wasn’t.) Many Biblical prophecies have been fulfilled within the past few decades; they’re easier to recognize when you can see the whole picture and not just a corner of it. So is this Gog and Magog? I, personally, don’t think so, but I guess we’ll see!
It should be noted that there are two messianic scenarios – the “easy way” and the “hard way.” It’ll happen eventually and when it does, we’ll get the scenario that we deserve. But if we play our cards right, we won’t need a Moshiach ben Yosef or a war of Gog and Magog at all.”
On the other hand, there are those who either interpret biblical prophecies radically differently or don't believe in them at all, and who regard Israel as a loose-cannon aggressor in the Gaza conflict. Some of the less theologically motivated among them see the nation as a nuclear liability in the Middle East.
“Many are concerned that Israel’s cruel war on Gaza, if it were to expand regionally to include Hezbollah in Lebanon, would drag Iran, a prominent Hezbollah supporter, into the fray. And that, in turn, might be all the justification Netanyahu would need to strike Iran’s supposed nuclear sites. In fact, in response to drone and rocket attacks on American personnel in Iraq and Syria by Iranian-backed militants, the U.S. recently destroyed a weapons facility in Syria.
As for the situation in Gaza, right-wing Heritage Minister Amihai Eliyahu, a member of Netanyahu’s coalition government, recently commented that “one way” to eliminate Hamas would be the nuclear option. “[T]here’s no such thing as innocents in Gaza,” he added. In response to those comments, Netanyahu suspended Eliyahu — a largely meaningless act — in an attempt to quiet criticisms at home and abroad that the war was harshly impacting innocent civilians. Or, perhaps, it had more to do with Eliyahu inadvertently admitting to Israel’s nuclear capabilities.
“No doubt fearing a broader war in the Middle East, the Biden administration is committing itself heavily to Israel’s efforts to eliminate Hamas: not only by delivering interceptors for its Iron Dome missile defense system and upwards of 1,800 Boeing-made JDAMs (guidance kits for missiles) but also by replenishing stocks of weapons for Israel’s American-made F-35 fighter jets and CH-53 helicopters as well as KC-46 aerial refueling tankers. In addition, two U.S. aircraft carrier task forces have been deployed to the Middle East, as has an Ohio-class nuclear submarine. To top it off, according to a New York Times investigation, the U.S. is providing commandos and drones to help locate Israeli (and American) hostages in Gaza.”
The Gaza conflict in particular has affected a lot of people. The world seems to be slowly turning against Israel, and that trend only appears to be accelerating. It's being felt as far away as Hawaii, as the Christian Science Monitor's Jackie Valley reports.
“How do you create “compassionate global citizens”? That’s the question facing U.S. schools in the throes of the Israel-Hamas war.
What that looks like at the Kawananakoa Middle School in Honolulu is students comparing and contrasting a natural disaster – the deadly Maui wildfire in August – with the human-created conflict in the Middle East. It also includes a teacher and student teacher pairing up to offer a lesson on recent history in Eastern Europe and the Middle East.
Teachers are not telling students what or how to think about the complex situation in Israel and Gaza, says Vice Principal Bebi Davis. Instead, they are nurturing intellectually curious students.
“You don’t want to keep them so sheltered that when they’re faced with a challenge, they don’t know how to balance their thoughts and emotions,” Dr. Davis says.
The approach offers a window into how K-12 educators are grappling with teaching amid a divisive global conflict. High schoolers are staging walkouts, student journalists are writing editorials using the term genocide, and in at least one case, students have threatened a teacher. School responses to spikes in antisemitism and Islamophobia in the United States have ranged from beefed-up security and intentional lessons – like those at Kawananakoa Middle School – to leaning into affinity groups to help foster understanding.”
“Zainab Chaudry, director of the Council on American-Islamic Relations office in Maryland, says the group has received reports of female Muslim students wearing hooded sweatshirts in place of their traditional hijabs, while others are shying away from revealing their Palestinian American identity to classmates.
When Ms. Mustafa’s sixth grade nephew wore his keffiyeh, a traditional Palestinian garment, to school, another student called him a terrorist, she says.
“This has been a turning point in our communities, where now we are beginning to realize the importance of speaking up,” Ms. Mustafa says. In the case of her nephew, he told a guidance counselor.
Jewish schools, too, have increased security amid threats. And Jewish students have been subject to bullying and harassment, especially as their peers are “blurring the lines” between antisemitic rhetoric and criticism of Israel, says Aaron Bregman, director of high school affairs for the American Jewish Committee.
“Just because they’re Jewish, students are being essentially connected to the Israeli government,” he says. “It gets them scared. It gets them intimidated.”
Apart from hardening physical infrastructure, schools are sorting out how to start and continue conversations.
Some schools or districts have remained silent, forgoing any public statements or sidestepping discussion with students. In New Jersey, the Palestinian American Community Center questioned one district seemingly taking that approach, and after an inquiry, administrators changed course, says Ms. Mustafa.
At the very least, she says, schools should be providing mental health support to students affected by the conflict. But she also sees schools as a “controlled environment” where children can discuss and learn about these challenging topics – as long as they’re not politicized in favor of one side.”
About a month ago, a pro-Israel teacher in New York City “incited a riot” by expressing an opinion.
“The pre-planned demonstration took place after the teacher shared a photograph of herself at a pro-Israeli demonstration on social media, holding a poster from the American Jewish Committee reading "I stand with Israel."
Tensions erupted globally on October 7 when Hamas fighters launched a massive surprise attack on southern Israel, killing 1,200 people and kidnapping another 240. Israel hit back with an air and ground campaign that has killed more than 12,700 Palestinians, The Associated Press reported, citing the Hamas-run Health Ministry.
According to The New York Post 25 officers from the New York Police Department (NYPD) attended the high school disturbance to regain order and the following day police arrested an 18-year-old for allegedly posting threats in a group chat.
Adams shared the New York Post article on X, formerly Twitter, adding: "The vile show of antisemitism at Hillcrest High School was motivated by ignorance-fueled hatred, plain and simple, and it will not be tolerated in any of our schools, let alone anywhere else in our city. We are better than this." ”
Next, Ani Wilcenski gives her account of her last few years at Columbia in Tablet Magazine. This brings the “fruits of Wokeness” further into focus.
“I could go on and on about the bizarre climate that took hold on our campus in 2020. But for now, I am raising these examples to illustrate only one thing: the flagrant disparity between the way my learned classmates treat injustice, suffering, and pain when it belongs to any other marginalized group, and the way they treat injustice, suffering, and pain when the victims are Jews.
Just the other day, my brother, a senior at Cornell, was sitting on his couch with a friend, who made exactly this point about a swastika-emblazoned car that was terrorizing people in his hometown (an incident which barely cracked the non-New York Post news). But my brother and his friend, like most people, are making such points in private, grousing among confidantes and then going back to their campuses or workplaces or friend groups like all is well. The unfairness is so endemic that to even point it out is to launch headlong into a brick wall of condescending and moralizing dismissal.
That was me at Columbia, swallowing my frustration and hurt at how unfair it felt that I was stuck chilling with a girl who shouted over Holocaust commemorations, while everyone else got to receive the balm of immediate social action at the slightest whisper of feeling unsafe. Now, I see that same girl grinning with bloodlust at rallies in the city where we both live, as the campus we once shared erupts into unrest and my brother informs our family chat that Cornell’s Center for Jewish Living is under lockdown while the FBI investigates online threats to “follow Jewish people home and slit their throats.”
I've said many things about sociologists in previous posts. Those claims were made well prior to October 7th and the onset of all the rippling social unrest from the Gaza War.
Recent events have only served to validate those claims.
“[A York University professor], who chaired the university’s sociology department from 2017 to 2021, was charged this week with mischief over $5,000 and conspiracy to commit an indictable offence by Toronto Police. The allegations relate to red paint thrown on doors and windows at the Bay-Bloor outlet of Indigo, Canada’s largest bookstore chain, and posters depicting its founder and chief executive officer, Heather Reisman, on a fake book cover entitled Funding Genocide.
York University has been prominent in campus responses to the Israel-Hamas war. Three student organizations issued a joint statement shortly after the Oct. 7 atrocities committed by Hamas, describing what occurred as an act of Palestinian resistance against “so-called Israel.” The university administration later began a process that could lead to withdrawal of recognition of the student groups.”
This is York University in Toronto, mind you.
“The incident at Indigo sent shock waves through the Jewish community, said Bernie Farber, founding chair of the Canadian Anti-Hate Network.
He called it a “classic case” of antisemitism, and said in the Jewish community it evoked memories of Kristallnacht, a 1938 attack on Jewish businesses and homes in Nazi Germany.
“I think people have to put themselves in the shoes of the Jewish community, the shoes of trauma. When we see this kind of attack on Jewish businesses, we see our history flashing before our eyes. That is what we feel and that is what we see.”
But pro-Palestinian activists said it was aimed at highlighting Ms. Reisman’s support of Israel, including her co-founding a scholarship fund for foreign soldiers who enlist in Israel’s army. Rachel Small, a member of the Jews Say No To Genocide Coalition, knows some of the accused and said groups of police officers broke open the doors of homes before dawn and arrested some individuals in front of their children. She called it an attempt to intimidate the protesters. “It’s just terrifying to see the abusive police repression over the past two days,” said Ms. Small, who demonstrated with others outside a police station Wednesday until all the accused were released.”
Many different historical voices have sought to conflate Judaism with Communism. Ironically, Zionism is a “colonial” ideology that arose among German Ashkenazi Jews around the same time Karl Marx was writing his political commentaries. The two are diametrically opposed to each other, yet both are widely attributed to the Jewish nation.
It's important, in my view, to note that Judaism is distinct from both, though both claim to carry ancient Hebraic principles into modern politics. In fact, they represent rival Jewish ideological factions.
https://m.vk.com/wall602361295_702
It's also important to understand the historical origins of Wahhabism, its role in Saudi politics, and the different versions of it that define geopolitical tensions in the Muslim world.
“With the advent of the oil bonanza — as the French scholar Gilles Kepel writes, Saudi goals were to “reach out and spread Wahhabism across the Muslim world … to “Wahhabise” Islam, thereby reducing the “multitude of voices within the religion” to a “single creed” — a movement which would transcend national divisions. Billions of dollars were — and continue to be — invested in this manifestation of soft power.
It was this heady mix of billion dollar soft power projection — and the Saudi willingness to manage Sunni Islam both to further America’s interests, as it concomitantly embedded Wahhabism educationally, socially and culturally throughout the lands of Islam — that brought into being a western policy dependency on Saudi Arabia, a dependency that has endured since Abd-al Aziz’s meeting with Roosevelt on a U.S. warship (returning the president from the Yalta Conference) until today.
Westerners looked at the Kingdom and their gaze was taken by the wealth; by the apparent modernization; by the professed leadership of the Islamic world. They chose to presume that the Kingdom was bending to the imperatives of modern life — and that the management of Sunni Islam would bend the Kingdom, too, to modern life.
But the Saudi Ikhwan approach to Islam did not die in the 1930s. It retreated, but it maintained its hold over parts of the system — hence the duality that we observe today in the Saudi attitude towards ISIS.
On the one hand, ISIS is deeply Wahhabist. On the other hand, it is ultra radical in a different way. It could be seen essentially as a corrective movement to contemporary Wahhabism.
ISIS is a “post-Medina” movement: it looks to the actions of the first two Caliphs, rather than the Prophet Muhammad himself, as a source of emulation, and it forcefully denies the Saudis’ claim of authority to rule.
As the Saudi monarchy blossomed in the oil age into an ever more inflated institution, the appeal of the Ikhwan message gained ground (despite King Faisal’s modernization campaign). The “Ikhwan approach” enjoyed — and still enjoys — the support of many prominent men and women and sheikhs. In a sense, Osama bin Laden was precisely the representative of a late flowering of this Ikhwani approach.
Today, ISIS’ undermining of the King’s legitimacy is not seen to be problematic, but rather a return to the true origins of the Saudi-Wahhab project.
In the collaborative management of the region by the Saudis and the West in pursuit of the many western projects (countering socialism, Ba’athism, Nasserism, Soviet and Iranian influence), western politicians have highlighted their chosen reading of Saudi Arabia (wealth, modernization and influence), but they chose to ignore the Wahhabist impulse.
After all, the more radical Islamist movements were perceived by Western intelligence services as being more effective in toppling the USSR in Afghanistan — and in combatting out-of-favor Middle Eastern leaders and states.
Why should we be surprised then, that from Prince Bandar’s Saudi-Western mandate to manage the insurgency in Syria against President Assad should have emerged a neo-Ikhwan type of violent, fear-inducing vanguard movement: ISIS? And why should we be surprised — knowing a little about Wahhabism — that “moderate” insurgents in Syria would become rarer than a mythical unicorn? Why should we have imagined that radical Wahhabism would create moderates? Or why could we imagine that a doctrine of “One leader, One authority, One mosque: submit to it, or be killed” could ever ultimately lead to moderation or tolerance?
Or, perhaps, we never imagined.”
In this time of mounting sectarian tension, historical accounts of Jews who made moral compromises during WWII with various justifications (survival chief among them) have resurfaced in many news feeds.
https://www.tabletmag.com/sections/arts-letters/articles/ellen-feldman-nazi-germany
As someone who's seen evidence of Zionist extremism, I can't completely discount the following:
“It was much easier to rectify the Nazi evil vis-a-vis a Zionist movement… it was less complex and, more importantly, it did not involve facing the victims of the Holocaust themselves, but rather a state that claimed to represent them. The price for this more convenient atonement was robbing the Palestinians of every basic and natural right they had and allowing the Zionist movement to ethnically cleanse them without fear of any rebuke or condemnation.”
-Israeli historian Ilan Pappé, The Ethnic Cleansing of Palestine (2006)
Views like those of the author of the previously quoted article are common in the Arab and/or Muslim world.
“Living as we do in a world where countries like the United States maintain a permanent warfare state, we must reckon with the horrific cost of war — and the obscene profits. The Merchants of Death War Crimes Tribunal notes that weapons makers’ stocks on Wall Street have risen 7 percent since the Israel-Hamas war started. Recognizing that war never sleeps, we must keep our eyes wide open and acknowledge the horrendous toll as well as our responsibility to build a world beyond war.
As much as we might long to grasp the hand of the child trying to free herself from underneath a collapsed building’s rubble, we need to imagine and long for the chance to grasp the hand of someone outside our own community, someone we’ve been taught to regard as an enemy or an invisible “other.”
Writing these words from a safe, secure spot feels hollow, but in my memory I return to the pediatric ward of an Iraqi hospital when Iraq was under a siege imposed by U.S. and U.N. economic sanctions. Agonized and grieving, a young mother, her world crashing in on her, wept over the dying child she cradled. I came from the country that forbade medicine and food desperately needed by each of the dying children in this ward. “Believe me, I pray,” she whispered, “I pray that this will never happen to a mother who is from your country.” ”
Meanwhile, there are those in the Arab world who observe the schism in the Democratic Party and have joined cause with western progressives because of certain parallels in their worldviews. There are many socialists and several socialist nations in the Muslim world, after all.
“As the left wing of the Democratic Party continues to diverge with the party establishment over Israel's war in Gaza, some of the US House's most outspoken critics are already facing primary challengers, with the conflict being used as a pretext.
Among the main targets are progressive Democrats Jamaal Bowman, Cori Bush, Summer Lee, Ilhan Omar and Rashida Tlaib, as well as Republican Thomas Massie for his recent "No" votes on multiple resolutions related to the conflict.
Though this news could be expected as primary season draws closer, the amount of money that will reportedly be spent to help oust these progressive incumbents, estimated at around $100 million for AIPAC and its affiliates, is alarming to progressives whose grassroots campaigns will have a hard time competing financially.
Moreover, targeting relatively young members of Congress, most of them women of colour, could be counterproductive to the Democratic Party, which could lose support from these incumbents' communities.”
https://www.newarab.com/analysis/aipacs-vast-campaign-unseat-pro-palestine-us-lawmakers
Every war has its casualties: first and foremost the lives that are lost, but also the destruction of natural or cultural landmarks. Well over a thousand years’ worth of history has been pulverized by Israeli bombardment over the last few weeks.
https://www.newarab.com/analysis/how-israels-war-erasing-gazas-history-and-culture
Meanwhile, at the other end of Asia, the chip business is booming in Taiwan as demand for AI training chips rises among almost all client companies.
And in Beijing (about a month ago now):
“China’s top diplomat welcomed four Arab foreign ministers and the Indonesian one to Beijing on Monday, saying his country would work with “our brothers and sisters” in the Arab and Islamic world to try to end the war in Gaza as soon as possible.
The ministers from Saudi Arabia, Egypt, Jordan, the Palestinian Authority and Indonesia chose to start in Beijing a tour to permanent members of the United Nations Security Council, a testament to both China’s growing geopolitical influence and its longstanding support for the Palestinians.
The ministerial committee stressed Monday the need for an immediate stop to “military escalation” in Gaza and to propel the political process forward with the goal of lasting peace, as well as “hold the Israeli occupation accountable for the blatant violations and crimes in the Gaza Strip and occupied West Bank,” according to a statement published by the Saudi foreign ministry on X, formerly known as Twitter.”
“China has proven that it can be a beneficial partner, above all, financially, to numerous Middle Eastern countries looking to expand infrastructure and technological investments, as well as diversify their economies. The United States still holds reservations about China’s true, long-term economic and political intentions, and will, therefore, strive to prove itself a dependable partner who is willing to go beyond its traditional role as a security enforcer. In order to achieve this aim, it is likely that we will see the US get involved in the construction and well-being of the Middle East’s economic and political landscapes, in addition to that of regional security. Regardless, China’s presence in the Middle East, as an investor, trading partner and diplomatic facilitator will only continue to grow.”
Many analysts have tried to discourage paranoia about recent nuclear weapons modernization by telling people to look on the bright side. After all, isn't it a good thing for the payloads of nuclear weapons to be kept in safe, modernized containment?
However, even the most optimistic of those analysts say that capability *upgrades*, as opposed to routine modernization, are most definitely something to be concerned about.
Even more indicative of the divide that now runs through the heart of the Democratic Party is Biden's move to fund both Ukraine's and Israel's ongoing war efforts in the coming year.
“A recent vote for emergency military funding that would have traded extremist demands from the Republican Party on immigration for billions of dollars in military aid to Israel, Taiwan and Ukraine failed in the Senate.
But now, some Democrats have signaled that they’re willing to trade human rights for more war funding — telling Republicans that they are ready to gut asylum protections and aid in mass deportations in exchange for those billions to send to Ukraine and Israel amid the latter’s atrocious war on Gaza.
The policies that Democrats are considering within this bill are truly horrendous and would result in permanent changes to our immigration system: nationwide expansion of expedited removals, which is a fancy way of saying mass deportations; ending humanitarian parole; restarting Title 42; and agreeing to the so-called “Safe Third Country” restrictions — the Asylum Ban redux.
Biden and Democrats are looking to resurrect — and in some cases expand — the worst policies of the Trump administration. Immigrant families who have been here for years, even decades, could be rounded up and deported. On the table also: A beefed-up version of Title 42 — a Trump-era emergency authority enacted under the pretense of pandemic-related health concerns that speeds deportations at the southern border. The “Safe Third Country” restrictions are just the asylum ban creeping up under some centrist think tank rebranding.
All this is to, in part, provide substantial military aid for more war in Israel and Ukraine. So we have to ask, who the hell is this deal supposed to be for, anyway?
Politically, there’s no electoral benefit for the Democrats here. Republicans and the compliant right-wing media ecosystem are going to say Democrats are “soft” on the border no matter how draconian a deal they make. There is no amount of “getting tough” that will satisfy Republicans. And shredding immigrant rights is a surefire way to turn off substantial portions of the Democratic base.”
Thomas Kennedy and Corey Hill, the writers of that article, are of the opinion, like many in the West, that what Israel is doing is nothing less than genocide, and that any funding at all makes the US an accomplice to such atrocities. It's really ironic how rapidly “anti-Zionism” has spread among progressives, especially considering that they've never accepted the defense of “anti-Zionism, not antisemitism” when it came from alt-right activists. They expect their own views to be treated with a level of nuance they have consistently denied others “on principle.” David Nabhan of the Times of Israel gives a strong case in point here.
“The largest city in the USA is under siege by something very close to an anti-religious, Western-hating, civilization-despising deathcult, and it’s going to take more than just turning the other cheek to save New York from the chaos into which extremist insanity is inexorably pulling the city and taking the rest of North America with it. New York’s public schools are a disgraceful example of the rot, turning out legions of math illiterates and blaming the outrage on the discipline itself, since the novel lunacy is that math is “racist!” Citizens can’t count either on the libraries of the great metropolis to nurture their children’s education—and instead must hustle them away from drag-queens invited to read to them during storybook sessions.
In New York the American flag is a negative and “triggering” icon, the country’s Founding Fathers considered racist, murdering villains instead of giants and heroes. Objectosexuality, ecosexuality, and many other lurid perversions are normalized and validated. Yet where the far-Left enemies of humanity focus their greatest attention, where their claws and fangs are most vigorously embedded is in the media.
Only days ago the November 21, 2023 front page New York Times’ feature image, for example, was a surreal graphic of an “unstable person” lingering at a subway station. The article it illustrated, “Behind Acts of Violence, Years of Mistakes,” instead of offering condolences to innocent passengers being pushed in front of trains and/or assaulted and menaced by random and out-of-control strangers haunting the transit system, saved its sympathy for the assailants. It purported that the general public’s failure to create a “safety net” for the “homeless mentally ill” is why New Yorkers still daring to take the public transport, now having been surrendered to the criminal chic, have become targets for anarchistic mayhem.
The Times has long since fallen from heroic heights to the yellow journalistic gutters through which it now stumbles, and after decades of faux-press articles standing shoulder to shoulder with the world’s great adversaries of Judaism and Israel—Al Sharpton, Keith Ellison, Linda Sarsour, Rashida Tlaib, Ilhan Omar—Jewish Americans in the present and befuddled historians of the future aren’t given any other choice but to believe their eyes.
The worst of it though is that the NY Times is not alone, and liberals, conservatives and moderates alike are fairly horrified to see what used to be the greatest, freest, most honest press in recorded history now brought so low as to be held in contempt with the same propagandistic mouthpieces as the USSR’s Pravda or Izvestia. Class warfare, nihilistic self-hatred of country and tradition, mockery of religion and normalcy, hardly disguised anti-Semitism and denigration of Christianity, a regaling of the criminal class and downgrading of the law-abiding, workaday citizenry—that now masquerades as the “news.” The normal and decent, regardless of their faiths, can’t fool themselves any longer the media hasn’t only turned on them but with such a vitriol as to make plain it despises them.”
Like Johnny Luk's earlier piece in Al Jazeera, David Nabhan's article gives the clearest definition of Wokeness I've seen from any westerner; the two descriptions are matched in accuracy only by each other. Like Luk, Nabhan has also successfully listed the “fruits” of Wokeness in a psychosocial sense. I recommend either or both pieces to anyone who is still confused about what “Wokeness” means.
Regardless, the situation continues to worsen on all fronts.
“The World Food Program reports that nearly everyone in Gaza is in need of food assistance to survive, but the “food systems are collapsing.” Human Rights Watch accused Israel on Monday of using starvation as a weapon of war.”
The Council on American-Islamic Relations (CAIR) has gone so far as to draft a newsletter called “Israeli War Crimes of the Day.” There are enough complaints of this nature currently lodged either with or about the IDF for such a daily segment to run for months without needing to reach any further back than October 7th for material.
And yet, the anger of Israeli parents is far from illegitimate. While some hostages have been released (many of them children), almost a hundred remain in captivity. Among them are at least thirteen (possibly over twenty) young women, and Noa Argamani has become the face of the movement to bring them home, or at least to learn the truth of their fate and bring the perpetrators to justice.
“A hand outstretched, terror etched on her face, screaming as she is carried away on the back of a motorcycle, the roughly 10-second clip became an instant symbol of Israel’s hostage crisis.
But more than two months after Noa Argamani was abducted from the Supernova, or Nova, music festival during the Oct. 7 terrorist attack, she remains a captive in Gaza. Even as other young civilian women were released during a weeklong ceasefire in November, there has been no sign of Argamani.
NBC News has uncovered information indicating she may not have been kidnapped by Hamas, but was instead most likely abducted by a mob of Gazans that swept into Israel hours after the initial attack. That may explain why she was not released during the November cease-fire: Hamas may not be holding her, or even know where she is.
Argamani is among 14 female civilians who have yet to be released by their captors. More than two months after she was taken hostage, friends and family are growing more desperate to know her fate, and why she hasn’t been freed alongside about 100 others.
“When you see someone you love so much and a person that is so close to you in this situation, you just get crazy,” Amir Moadi, 29, a roommate and friend of Argamani’s, said in an interview. “Because there’s nothing you can do.”
While it’s known Hamas terrorists took hostages during the attack, who took Argamani is less clear, according to text messages, phone records, satellite images and human sources, as well as an NBC News analysis of the sun’s position during her abduction. The information indicates that she may not have been seized by Hamas militants at all, and instead may have been taken by another group of men who followed trained Hamas fighters out of the blockaded Palestinian enclave into Israel.
Moadi realized Argamani had been taken from the Nova music festival near Re’im when he saw the video that sent shockwaves around the world. He watched the footage of his close friend being driven away and reaching out toward her boyfriend, Avinatan Or, as their assailants marched him behind her. Israeli officials say that as many as 350 people were killed at the festival.
A second video posted to social media on Oct. 7 showed Argamani, who turned 26 in captivity, sitting on a sofa drinking from a water bottle. Two people with bare feet could be seen walking behind her. It gave some of her friends hope she was OK.
“It’s crazy to say, but … I was thankful that she’s not dead because I saw other videos and I saw what happened to other people,” Moadi said.
For Argamani’s loved ones, efforts to free her feel like a race against time because her mother, Liora, has terminal brain cancer, Moadi said. They are desperate to know why she wasn’t among those exchanged in an extended hostage-prisoner swap before talks between Israel and Hamas collapsed on Dec. 1.”
As I write this, 166 additional Palestinian deaths have been confirmed from the last 48 hours alone, leaving the total count of Palestinian casualties at well over 20,000. Meanwhile, Netanyahu has said that Israel is “paying a heavy price” as over a dozen soldiers have fallen since Friday, bringing the total number of IDF casualties to 154.
https://www.google.com/amp/s/www.bbc.com/news/world-middle-east-67814475.amp
Janwell Mann's Coherent Reality substack will be our final source for this section, as he summarizes everything I've juxtaposed here in a nutshell.
“In Douglas Adams' "Hitchhiker's Guide to the Galaxy," which might as well be based on actual events, the protagonist travels around the universe with a spaceship powered by an Improbability Drive. The great thing about this engine type is that it instantly gets you to far-away places. The downside is that you must handle highly improbable events when you speed up or slow down. You would have to, for example, dodge whales materializing out of the ionosphere, witness your uncle morphing into a piece of melon, or your favorite movie star giving birth to a, say, kitty on your kitchen floor when you're trying to brew coffee - among other unpredictable and absurd nonsense that can represent a hefty price for rapid travel.
Balancing the pros and cons of improbability travel is a matter of personal taste and entirely beside my point.
My point is that the absurdly improbable and the improbably absurd are both materializing on our plane of existence so rapidly right now that it makes me question whether we could be inside the contrail of an Improbability Drive.
The only (much less probable, in fact, downright fantastical) alternative is that we're in the midst of a carefully planned social engineering experiment.
The Director-General of UNESCO, Julian Huxley (Aldous Huxley's brother), envisioned the future of our planet in 1946 with the concept of "world citizenship" and a path to "permanent world peace, evolving human society from tribes to nations, and from national consciousness to one world."
Huxley and his team, supported by a group of elite families who were pulling a lot of strings, including those of the central banks, saw that achieving this vision would not be easy because most nations, at least then, still had a solid national identity, sovereignty, culture, history, and tradition.”
“In summary, in a strictly hypothetical and improbable sense, the more the fabric of individual identity is dissolved, the easier it would be to build division and hate between groups, belief systems, ethnicities, races, and genders - and permanently lodge people's attention away from what's happening around them, and where they are being led.
Different faiths, spiritual and religious strands, often a source of energy for people, would also have to be mixed and merged before reassembling them in a spiritual soup where everyone sees the same light (of delusion). Nothing like that has ever happened before.
Let's not forget the promise - although extremely improbable - of spreading pandemics, the invisible microbes that represent the ultimate threat to our lives and maximize our fear and obedience factor. The upside? Microbes can be fought only with isolation (leading to further individual and economic destabilization) and modern medical procedures that open the pandora's box for deeper biochemical intrusion.
Thankfully, no sane individual would accept such an intrusion on their medical privacy without researching what is really happening behind the curtains or inside the needles.
As Julian Huxley and his cadre believed, leaving ourselves to the forces of natural evolution will only lead to global disaster, overpopulation, and wars.
We can’t blame them for wanting to build a safe, centralized world and let go of our naive, outdated, primitive customs, nations, and traditions so that a handful of genuinely knowledgeable minds could decide our future and bring about eternal peace.
At the last moment, they realized that such a project would never materialize, being profoundly improbable and impossible - just like the present.”
Call it conspirituality, call it intuitive as opposed to deductive thinking, call it what you will. These assertions are not baseless. They describe the world as it is, complete with the problems that plague it, in a way that has a lot of explanatory power. The picture may well not be entirely correct (I'm sure there are some errors), but I've seen ample evidence that the “bones” of these claims are real, and that the powers mentioned in them will soon be able to affect any and every human being on Earth. Pandemics and wars only aid the realization of that goal.
Will it work out in the long run? Time will tell.
Part 5: A Cosmic Utopia
Some say religion is declining in the modern world. Others see the increase of knowledge brought on by modernity as a fulfillment of prophecy in itself. Still others believe we already crossed into a new age, in a more mystical sense, over a hundred years ago: the Aquarian Age, ushered in by the “World Teacher” (alternatively known as the “Beast”), Aleister Crowley. His influence is more pervasive and enduring than most people outside the New Age movement or the heavy metal music genre realize.
https://youtu.be/PqGI6HybNho?si=0CSGN2zqZOFr5r_Q
The following describes the connections between this founding father of modern occultism and the founding fathers of various other industries and fields of research that have shaped our world over the decades since.
“ 'There is nothing to match flying over LA by night', said the French philosopher Jean Baudrillard; 'only Hieronymus Bosch's Hell can match the inferno effect.' Yet such is LA's fallen angel charm that it has exercised a narcotic fascination on those seeking to transform themselves.
In the first half of the 20th century, the city acted as a doctrinal battleground: mystic cults sought to transfigure souls, scientists strove to liberate man from earth's atmosphere, attaining a new, literally higher state of being, while in the verdant arena of Hollywood the studios were also seeking to displace human frailty with a mythical order of demi-gods - 'the stars'. It was in this Babel of variegated virtue that a strange, Pynchonian network was formed, linking the Edwardian occultist Aleister Crowley, the brilliant young rocket scientist John Whiteside Parsons and the maverick genius of America's cinematic avant-garde, Kenneth Anger.
If anyone could make himself feel comfortable in hell, you imagine it would be Crowley, aka 'The Great Beast', aka 'The Wickedest Man on Earth', aka '666'. Born in 1875, he was a poet, mountaineer, orientalist and experimenter with drugs. A consummate showman and avid self-promoter (he faked his own death to drum up interest in his first painting exhibition), he was most famous as a practitioner of the occult. Although Crowley traveled to Los Angeles only once, he exerted a considerable influence on the city's inhabitants through his religion of Thelema (Greek for 'will'), the central creed of which was: 'Do what thou wilt'. In Los Angeles this sensual doctrine was adopted and promulgated by the Agape Lodge of the Ordo Templi Orientis (OTO), originally a German occult order related to the Freemasons, which came under Crowley's spell. The intention behind Thelema was to raise man's consciousness to a higher level, specifically through the use of sex and drugs. Indeed hedonistic pleasures were all-important factors in Crowley's 'magick' rituals, allowing the magus to reach the recesses, or rather the outer limits, of his being. His 'sex-magick' was intended as the fuel to power man to a new state of being, out of the cage of his socially conditioned ego. That such a radical creed of excess and chemically altered perception should have caught on in Los Angeles is not surprising; this was, after all, the city in which Aldous Huxley chose to open 'the doors of perception' to internal exploration.
Crowley's teachings accordingly attracted some of the most unusual admirers. Dr Alfred Kinsey, the sexual historian and author of the landmark Sexual Behaviour in the Human Male (1948), was obsessed with both Crowley's erotic writings and his 'sex-magick' practices. In 1955 he visited Crowley's occult abbey on Sicily, where many rituals were enacted. (He took with him on this pilgrimage his admirer, and one of his many subjects, Kenneth Anger). And in the 1960s Dr Timothy Leary, conscious of his debt to Crowley, talked of a similar need to 're-imprint our reality tunnel' through the use of hallucinogens.
Among those whom Crowley's teachings had entranced, none was so thoroughly converted as John Whiteside Parsons, a brilliant young scientist dubbed the 'James Dean of Cal Tech'. Parsons' work at the Jet Propulsion Laboratory in Pasadena had seen him heavily involved in the invention of solid rocket fuel, a breakthrough that would ultimately make space travel possible. Yet while his scientific work made him one of the most respected scientists in his field, his fascination with the occult was the presiding belief in his life.”
“Crowley died in 1947, his last words reputedly being the less than assured 'I am perplexed'. John Parsons died in 1952, at the age of 37, when a mysterious explosion ripped through his home laboratory. Clumsiness and assassination have both been posited as reasons for the blast (Anger suggests the tycoon Howard Hughes had Parsons murdered), but the most intriguing, and Lovecraftian, suggestion proposes that he was vaporized while trying to summon a demonic homunculus from the ether. Kenneth Anger has continued to make his Crowley-inspired films, culminating in the grim pageant of Lucifer Rising (1980). His next project, long in the preparation, is said to be about Crowley himself. As for the alchemical ley lines that once ran through LA, they are now a pale reflection of what they once were. Crowley's teachings have become diluted into a thousand self-help mantras and New Age cults. The Hollywood system of creating demi-gods was destroyed, partly by Anger, whose gaudy books Hollywood Babylon I and II (1958 and 1984) viciously portrayed a phenomenology of broken gods addicted to morphine, booze and sex. But while the space race has long been relegated to history, its promise of new worlds refuses to die.”
https://www.frieze.com/article/strange-angels
That Frieze article was published by a certain George Pendle in 2002. At the turn of the century, the space program was basically treading water and the commercial space industry hadn't really gotten off the ground yet. Both of those things have changed, as the whole world has changed over the last twenty years.
The following article was published by the Guardian's Julian Baggini, who not only admits that atheists and secularists can have faith, but alludes to the forms of “irreligious” faith that have helped shape the world as it is today.
“If “religious” is a slippery concept, “faith” is even greasier. In some of its senses, we certainly do see plenty of faith outside organised religion. If faith is a kind of passionate conviction, for example, then look no further than the zealous breed of atheist who not only personally rejects religion, but also sees it as an offence to human rationality. Like the religious, their core belief becomes the centre of their lives, their moral compass, their blueprint for a better world.
Faith can also provide the godless with a source of salvation that is based more on hope than experience. Reason, for example, is vitally important, but mainly because, like democracy, it is only better than the alternatives. Nothing is more powerful for helping us to understand the world accurately. But when we use reason to try to move from understanding to managing and changing, experience tells us we often go horribly wrong. From central state socialism to failed “scientific” diets via the excesses of industrial agriculture, an over-abundance of faith in the power of rational planning has too often left us in a terrible mess.”
That sort of faith - faith in the power of prediction, enabled by technology - forms the (a)theological core of Syntheism, a new religion introduced by a Swedish futurist in recent years.
“It is two years since Alexander Bard founded a new religion called Syntheism in which he claimed that the “the internet is God”.
Activist, musician and now religious leader – and playing “the nasty judge” on Swedish Pop Idol - Bard now has a new way to spread the word with the publication of his latest book, Syntheism - Creating God in The Internet Age, out this week.
“In Christianity, one of the last things Jesus said to his disciples was ‘I will always be with you’, meaning that the Holy Ghost is the manifestation of God when the believers are together,” says Bard. “The internet is 7 billion people connected together in real time, and if that isn’t the holy spirit then I don’t know what it is.”
In Bard’s analysis of history, where feudalism had Christianity to keep people on the land and capitalism had individualism to keep people consuming, so the internet age is going to have Syntheism to keep people online.
“What we have been lacking up to now is the storytelling. Someone has to do the fucking Immanuel Kant for the new age. So Syntheism is preparing the way for a new elite and I am one of its storytellers. For my friend Julian Assange what Syntheism does is to create a bigger story for WikiLeaks. It is the popular movement that could support something like WikiLeaks eventually.”
If Saint Paul had his vision on the road to Damascus, Bard had his “while spending the night lying next to a beautiful naked actress at Burning Man during which I realised that rather than carry on writing books about the problems the internet was causing I should write about Syntheism.”
Burning Man, the annual week-long festival in Nevada’s Black Rock Desert, embodies the same anarchistic values of the opposition to hierarchical authority and belief in voluntary self-government that are central to the ethos of the internet, says Bard. He describes it as “an experimental temporary utopia that is the world’s first physical manifestation of the internet itself”.
Burning Man, and spin offs including Burning Nest in the UK, show that digital natives under 25 now see “the online world as the real world and the real world as a reflection of the online world,” says Bard.”
This sort of atheology is shared in principle, if not in name, by many computer scientists and tech moguls in Silicon Valley. Techno-utopianism is a broader, more flexible term than Syntheism, and can be said to be an article of doctrine spanning several interrelated belief systems (much like religious dualism). Syntheism, on the other hand, is specifically Alexander Bard's creation… but given how closely the name of Google's AI chatbot, Bard, resembles his own, connections are easy to speculate about (though currently impossible to prove).
This places a certain “spiritual” significance upon scientific advances in neurology, especially those which contribute to the rise of neuromorphic AI. Studies of how the minds of various animals work bring “the science” ever closer to producing a full “connectome” mapping the neurological connections of the human brain.
Efforts to that end include experiments on rats… and on octopuses, which some scientists still believe to be of extraterrestrial lineage, though that theory has been widely panned by the scientific community.
Also central to this line of research (for many people) is the belief that “human exceptionalism”, or “anthropocentrism”, is an obsolete (and therefore, by the standards of progressive morality, “unethical”) moral system. The rise of “non-anthropocentric” alternatives like ecocentrism and biocentrism provides a basis for new religions like Syntheism, and draws (originally) on occult traditions like Wicca and Thelema. The ahimsa principle promoted by western vegans is Hindu in origin.
And of course, this worldview also drives climate change initiatives and shapes how they seek to counteract gradually rising global temperatures.
“The last time that levels of atmospheric carbon dioxide were as high as they are today, Greenland was free of ice and the savanna and grassland ecosystems where humans evolved didn’t exist yet.
That’s the conclusion of a study published in Science Friday, which researchers say compiles “the most reliable data available to date” on atmospheric carbon dioxide levels over the last 66 million years.
“It really brings it home to us that what we are doing is very, very unusual in Earth’s history,” lead author Baerbel Hoenisch of the Columbia Climate School’s Lamont-Doherty Earth Observatory told Agence France-Presse.
By burning fossil fuels and clearing natural carbon sinks like forests, industrial capitalism has raised global carbon dioxide levels to 419 parts per million (ppm) today from around 280 ppm at the beginning of the industrial revolution.
“Rising atmospheric CO2 is the most obvious and startling expressions of our impact on the global environment,” study corresponding author and University of Utah geologist Gabe Bowen wrote on social media. “The concentration has risen by ~50% in the past 100 years. Every year is now marked by the highest CO2 levels *ever observed* by humans!” ”
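A quick back-of-the-envelope check of the figures quoted above (the two concentrations are taken straight from the article; nothing else is assumed):

```python
pre_industrial_ppm = 280   # CO2 concentration cited for the start of the industrial revolution
today_ppm = 419            # CO2 concentration cited for today

increase_percent = (today_ppm / pre_industrial_ppm - 1) * 100
print(round(increase_percent))   # ~50, matching the "~50%" rise mentioned in the quote
```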
Many on the Right either don't see climate change as a global emergency or deny it outright. Personally, I don't see any viable solution on offer other than the one that seems to be preferred by leftists: climate lockdowns and totalitarianism.
If you think the concepts I've listed here are nonsensical or disparate and unconnected outside of my own mind, think again. I'm no analyst, but analysts have looked at these same matters and even come up with an acronym for them.
“The recent ouster of Sam Altman from OpenAI, followed by his reinstatement within a week, triggered a flurry of speculation. What led OpenAI’s board of directors to fire the face of artificial intelligence, one of the most popular figures in Silicon Valley?
Some believe that Altman’s dismissal was the culmination of a fight between — as the media have framed it — “effective altruists” and so-called “accelerationists.” Effective altruists, also known as “EAs,” want to slow the march toward artificial general intelligence, or AGI, while accelerationists want to push the pedal to the metal.
Altman, according to this narrative, leans accelerationist, while board members like Ilya Sutskever, Helen Toner and Tasha McCauley align with the EA approach. In the end, the accelerationists won, as Altman returned to his position as CEO and Sutskever, Toner and McCauley were removed from the board. Some EAs now think that this power struggle may be “as bad for EA’s reputation as [the collapse of] FTX,” and the subsequent imprisonment of its former CEO and co-founder, Sam Bankman-Fried, arguably the most prominent EA in the world alongside his moral adviser, philosopher William MacAskill.
What exactly is “accelerationism”? How does it contrast with EA, and does it connect with what Dr. Timnit Gebru and I call the “TESCREAL bundle” of ideologies — a cluster of techno-futuristic worldviews that have become immensely influential within Silicon Valley and major governing institutions? With a few exceptions, there’s very little in the popular media about the accelerationist movement, which received a burst of momentum with Altman’s return to OpenAI.
While there are important differences between accelerationism and EA — which accelerationists play up in blog posts and interviews — their respective visions of the future are more or less identical. If you imagine a five-by-five-foot map of different ideologies, accelerationism and EA would be located about an inch apart. Taking five steps back, they’d appear to be in the same location. Meanwhile, both would be about three feet from the field of AI ethics, which focuses on the real-world harms caused by AI — from worker exploitation and algorithmic bias to the spread of disinformation and environmental impacts of AI systems.
To understand the topography of this map, let’s put e/acc and EA under a microscope to see how they diverge and where they overlap.
The differences between accelerationism and EA fall into two areas. The most significant concerns their respective assessment of the “existential risks” posed by AGI. Accelerationists are techno-optimistic: They believe the risks are very low or nonexistent.
To quote one of the thought leaders of contemporary accelerationism, Guillaume Verdon — better known by his spooneristic pseudonym “Beff Jezos” — an existential catastrophe from AGI has a “zero or near zero probability” of happening. Another leading accelerationist, tech billionaire Marc Andreessen, declares in one of his manifestos that he is “here to bring the good news: AI will not destroy the world, and in fact may save it.”
Many EAs tend to be much more techno-cautious, at least when it comes to certain hypothetical technologies like AGI. While the popular media and accelerationists alike often refer to this opposing group as “EA,” a more accurate label would be “longtermism.” The reason is that EA is a broad tent, and includes many people who aren’t that interested in AGI, existential risks and similar matters. Traditionally, EAs have distinguished between three main cause areas within the movement: alleviating global poverty, improving animal welfare and longtermism. When EA formed around 2009, it was initially focused entirely on global poverty. But over time, most of its leading figures and grantmaking organizations have shifted toward more longtermist issues, such as mitigating the supposed existential risks of AGI.
The reasoning went like this: The fundamental aim of all EAs is to do the most “good” possible in the world. Alleviating global poverty and ending factory farms seem like obvious ways to do this. But then EAs realized that, if humanity survives for the next century or so, we’ll probably spread into space, and the universe is huge and will remain habitable for trillions of years. Consequently, if one takes this grand, cosmic view of our place in space and time, it seems obvious that most people who could exist will exist in the far future — after we’ve spread beyond Earth and colonized the accessible universe. It follows that if you want to positively influence the greatest number of people, and if most people live in the far future, then you should focus on how your actions today can help them not only live good lives, but come into existence in the first place.
The connection with AGI is that EA longtermists — or “longtermists” for short — believe that it could be essential for colonizing space and creating unfathomable numbers of future people (most of whom, incidentally, would be digital people living in vast computer simulations). This is the upside if we get AGI right, meaning that we build an AGI that’s “value-aligned” with this “ethical” vision of the future. The downside is that if we get AGI wrong, it will almost certainly destroy humanity and, along with us, this “vast and glorious” future among the heavens, in the words of longtermist Toby Ord, co-founder of the EA movement.
Everything, therefore, depends on how we build AGI — the entire future of humanity, spanning trillions and trillions of years into the future, spread across galaxies, hangs in the balance. And given that we’re on the cusp of building AGI, according to many longtermists, this means that we’re in an absolutely critical moment not just in human history, but in cosmic history. What we do in the next few years, or perhaps the next few decades, with advanced AI could determine whether the universe becomes filled with conscious beings or remains a lifeless waste of space and energy.
This is partly why many longtermists are techno-cautious: They want to be very, very sure that the AGI we build in the next few years or decades opens up the doors to a heavenly techno-utopia rather than turning on its creators and annihilating us, thereby ruining everything. Over the past 20 years, longtermists (though the word itself wasn’t coined until 2017) have thus explored and developed various arguments for how and why a poorly designed AGI could kill us.
Accelerationists claim these arguments are unscientific and overly pessimistic. They denigrate proponents of caution as “doomers” and “decels” (short for “decelerationist,” and presumably a play on the word “incel.”) From the accelerationist perspective, there’s nothing worse than a “decel,” which can give the false impression that accelerationists and longtermists are miles apart on the ideological map.
So, longtermists are techno-cautious, while accelerationists are techno-optimistic, in the sense that they don’t see AGI as existentially risky. And where longtermists see government intervention as playing an essential role in acting cautiously, accelerationists think that government intervention will only make things worse, while simultaneously slowing down the march of progress toward a utopian future world.”
That article from Émile P. Torres goes into a lot of detail on this worldview, and summarizes in a far more succinct way than I have been able to do here how far-reaching and grandiose the visions based in it are. As I have said elsewhere, “cosmic utopia” is about as religious a phrase as I can imagine coming from a secular movement… and it leaves me wondering whether “nihilism” is a proper term for a religious doctrine that directs humanity to do everything in its power to hasten the demise of the universe.
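To make the longtermist arithmetic in that excerpt concrete: the whole argument rests on multiplying a tiny probability of affecting the far future by an astronomically large number of hypothetical future people. Here is a toy sketch; every number in it is invented purely for illustration and comes from nowhere in the article.

```python
# Toy expected-value comparison in the style of longtermist reasoning.
# All figures below are made up for illustration only.

lives_saved_by_charity = 10_000        # a concrete, near-term intervention
future_people = 1e38                   # hypothetical far-future (largely digital) population
chance_of_helping_them = 1e-20         # tiny probability that your action reaches them

near_term_value = lives_saved_by_charity
far_future_value = future_people * chance_of_helping_them

print(near_term_value)    # 10,000
print(far_future_value)   # 1e+18 -- the huge population swamps the tiny probability
```

Whatever one makes of that move, it is this arithmetic that lets “cosmic utopia” carry so much weight in these circles.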
It all honestly reminds me more than ever of the three primary “gods” (below God himself; it was a deistic setting, as I recall) that made up the main cosmogony of the old World of Darkness tabletop RPG setting: the Weaver, the Wyld, and the Wyrm. Nature, technology, and sorcery… the three primary “teleological” forces acting on the world of the setting.
https://whitewolf.fandom.com/wiki/Triat
No doubt motivated by similar sets of beliefs, the global leadership of BAE Systems is diligently working to produce “rad-hard” (radiation-hardened) technology. Such tech will be a godsend if another Carrington Event (where a coronal mass ejection from the sun strikes Earth's magnetosphere head-on) occurs, an event that would result in an electromagnetic pulse capable of wrecking most currently functioning electronic devices on this world. Another primary objective is space travel, as these are the sorts of electronics housings that could theoretically survive voyages to and from destinations across the Solar System.
“In a historic collaboration announced at the ESA Space Summit in Seville, the European Space Agency (ESA), Airbus Defence and Space, and Voyager Space have joined forces through a trilateral Memorandum of Understanding (MoU) to propel the Starlab space station into the post-International Space Station (ISS) era. This strategic agreement marks a pivotal moment in the evolution of space exploration and underscores their shared commitment to advancing scientific discovery and technological innovation.
The MoU outlines a shared vision to explore and harness the potential of collaborative efforts in low Earth orbit (LEO) destinations beyond the ISS. The immediate focus of this partnership is to pave the way for sustainable space access opportunities for Europe through the development of the Starlab space station.
This collaboration envisions:
Access to Starlab: The ESA and its member states will have access to the Starlab space station for astronaut missions, sustained long-term research endeavors, and the cultivation of commercial enterprises. This positions Starlab as a pivotal hub for a myriad of space-based activities.
Research Collaboration: Both parties will contribute to cutting-edge research projects in upcoming missions. Leveraging European expertise in advanced robotics, automation, artificial intelligence, and focusing on priority scientific domains such as health and life sciences, this collaboration aims to push the boundaries of scientific exploration.
Comprehensive Ecosystem: The partners aspire to establish a holistic ecosystem encompassing the Starlab space station as a destination in low Earth orbit. Additionally, there is consideration for a potential European transportation system (cargo and crew) developed by the ESA. The integration of standardized interfaces is envisioned to foster an open-access policy, promoting inclusivity in the realm of space exploration.
This agreement signifies ESA's commitment to facilitating a seamless transition from the ISS era towards sustained human and robotic exploration in low Earth orbit beyond 2030, emphasizing the incorporation of commercial services into the next phase of space exploration.”
That next phase is closer than most would believe. Before I wrap up, here's a quick list of the new innovations in space travel coming down the pipeline. First up, we have the Dream Chaser-class shuttle, marketed as a “space plane” by its designers, a Colorado-based aerospace company by the name of Sierra Space. It's not a true “rocket plane”, which would be capable of reaching orbit under its own power. The Dream Chaser isn't quite that OP, but it's certainly a step in that direction.
The Dream Chaser prototype already exists, so the design is well past the conceptual phase. True rocket planes remain a major engineering problem, mainly because such a craft would require two different kinds of engine: conventional jet propulsion for use in-atmosphere, and a true rocket motor for use in outer space. This, in turn, would require an onboard supply of both jet fuel and rocket propellant, which are not the same thing, though both are highly volatile. Such a craft would be complex indeed, and the more complex the design, the more points of potential failure the machine contains. Fortunately for the “cosmic utopia” visionaries, rocket planes aren't expected to revolutionize space travel anytime soon. Most have hung their hopes on a very different hook.
As Verdon and other accelerationists have mentioned, “climbing the Kardashev incline” is something humanity must do if cosmic utopia is to be achieved. One of the main criteria to do this is to gain the capacity to process all this planet's energy, and one of the main concepts associated with the Kardashev scale is superstructures.
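For a rough sense of where humanity currently sits on that climb, Carl Sagan's interpolation formula for the Kardashev scale is the usual yardstick. A minimal sketch, with the caveat that the power figure below is only an approximate, commonly cited estimate of total human energy use:

```python
import math

def kardashev(power_watts):
    """Carl Sagan's interpolation formula for the Kardashev scale."""
    return (math.log10(power_watts) - 6) / 10

# Roughly 2e13 W of total human power consumption (an approximate figure)
print(round(kardashev(2e13), 2))   # ~0.73 -- well short of Type I, which sits at 1e16 W
```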
Superstructures are a prevalent idea in science fiction as well, which is where many of these techno-utopian ideals come from in the first place. The Death Star from Star Wars is a prominent example of a fictional superstructure. The “space halos” of the Halo franchise are another, and Star Trek and other well-known sci-fi franchises feature gargantuan space stations and other wonders of cosmic artifice.
All of the above are beyond humanity's current ability to build as long as lifting construction materials into space in any meaningful quantity remains so prohibitively expensive. Harvesting material from NEOs (near-Earth objects) would be a possibility if the infrastructure to do so existed in orbit, but it doesn't.
The only possible “real” solution under currently understood physics is a (slightly) smaller and more realistic superstructure called a space elevator.
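The basic constraint on any space elevator is orbital mechanics: the cable has to reach at least to geostationary altitude (with a counterweight beyond it) so that the structure as a whole circles the planet once per sidereal day. That altitude falls straight out of Kepler's third law; here's a quick sketch of the calculation:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # mass of Earth, kg
T = 86164.1      # one sidereal day, s

# Kepler's third law: the orbital radius at which a satellite circles Earth once per sidereal day
r = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - 6.378e6) / 1000   # subtract Earth's equatorial radius

print(round(altitude_km))            # ~35,786 km: the familiar geostationary altitude
```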
The hollow moon theory is a decades-old conspiracy theory about secret alien bases on the moon; some versions claim that the moon itself is an ancient spaceship (one specific version even ascribes interstellar origins to it, saying it was built somewhere in the Orion Nebula). Allegedly, the outer layer of lunar dust conceals something not all that different from an old, run-down Death Star.
Regardless of whether there's presently any truth to this theory or not, the plans these techno-utopians have in store for the moon would see it transformed into something very similar, piece by piece.
It's basically a given that this “cosmic utopia” would ultimately place the earth itself at the center of an interstellar civilization, and that the “great work” will begin with the moon and Mars. Venus is a more dubious proposal, but it's still on the list.
As scientific research reveals more and more of the secrets of nature, exceptions are being found to rules that were long seen as universal. One of these is a creature that lacks mitochondria in its cells, and therefore does not need oxygen to survive.
“But H. salminicola — a cnidarian animal related to jellyfish and coral — don’t have mitochondria, and therefore can’t perform aerobic respiration. Lead study author Dorothée Huchon discovered this as she was sequencing mitochondria across Myxozoa (a class of parasites).
“My goal was to assemble mitochondrial genome to study its evolution in Myxozoa and… Oops, I found one without a genome,” she told Vice. “I first thought that the lack of mitochondrial genome among the DNA sequence was the result of a bug in genome analyses. But then I realized that it has lost not just the mitochondrial genome but the whole set of protein genes that interact with the mitochondrial genome and all the majority of genes involved in respiration.”
Discovering stable wormholes, to say nothing of evidence of alien civilizations, would be a convenient “leg up” for these narratives. The famous astronomer Frank Drake (originator of the Drake equation) referred to the prospect of alien contact (in the form of exchanged messages, not an invasion) as an “owner's manual to the universe.”
https://futurism.com/the-byte/astrophysicist-wormholes-alien-civilization
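For reference, the Drake equation itself is nothing more than a chain of multiplied factors. A minimal sketch is below; the input values are illustrative guesses, not measurements, since several of the factors remain completely unknown:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* x fp x ne x fl x fi x fc x L: communicative civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative inputs: 1.5 stars formed per year, most with planets, one habitable world each,
# modest odds for life, intelligence, and communication, and a 10,000-year signalling lifetime.
print(drake(R_star=1.5, f_p=0.9, n_e=1, f_l=0.5, f_i=0.1, f_c=0.1, L=10_000))   # 67.5
```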
On the other hand, even without ET intervention, a lead researcher at the ESA (European Space Agency) has predicted that the first human trials for cryostasis (suspended animation for long-term space travel) will begin in the next ten years.
Statistical mechanics is a branch of physics that aims to establish a direct link between the microscopic laws of motion (classical and quantum) and classical thermodynamics. In other words, it's one piece of the broader, ongoing effort to build a unified theory of nature, placing humankind much closer to solving the riddle of the cosmos.
In answer to the question, ‘Are there observable phenomena where the Tsallis entropy provides new understanding?’, Dr Alberto Robledo of Instituto de Física, Universidad Nacional Autónoma de México (UNAM) gave the following answer:
“Within condensed matter physics: the formation of glasses, the transformation of a conductor into an insulator, and critical point fluctuations. Regarding complex systems problems: the phenomenon of self-organisation and the development of diversity (biological or social, like languages). Also the comprehension of empirical laws, like those relating to the universality of ranked data or the metabolism of plants and animals.”
Theoretically, these natural processes are parts of the natural order where an underlying mechanism of entropy itself can be observed and mathematically modeled. Such research may help future cosmic travelers understand more exotic and dangerous natural phenomena, far from this world's friendly skies.
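For readers who want the formula behind that discussion, the Tsallis entropy is a one-parameter generalization of the familiar Boltzmann–Gibbs entropy (this is the standard textbook definition, not anything specific to Robledo's answer):

$$ S_q \;=\; k\,\frac{1-\sum_i p_i^{\,q}}{q-1}, \qquad \lim_{q\to 1} S_q \;=\; -k\sum_i p_i \ln p_i $$

The parameter q tunes how strongly rare events are weighted; ordinary thermodynamics is recovered in the q → 1 limit.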
Meanwhile, strides are being made towards the development of such a theory in the field of relativistic quantum mechanics. The ‘Poveda-Poirier-Grave de Peralta’ (PPGP) equations can express the relativistic behaviors of quantum particles in far simpler terms, making them more accessible to students who aren't studying advanced quantum mechanics. This, along with the advent of AI and deep learning systems, may serve to catapult the field forward in the next few years, providing answers to several long-standing cosmological enigmas.
“Why isn’t the universe boring? It could be. The number of subatomic particles in the universe is about 10^80, a 1 with 80 zeros after it. Scatter those particles at random, and the universe would just be a monotonous desert of sameness, a thin vacuum without any structure much larger than an atom for billions of light-years in any direction. Instead, we have a universe filled with stars and planets, canyons and waterfalls, pine trees and people. There is an exuberant plenty to nature. But why is any of this stuff here?
Cosmologists have pieced together an answer to this question over the past half-century, using a variety of increasingly complex experiments and observational instruments. But as is nearly always the case in science, that answer is incomplete. Now, with new experiments of breathtaking sensitivity, physicists are hoping to spot a never-before-seen event that could explain one of the great remaining mysteries in that story: why there was any matter around to form complicated things in the first place.
The interestingness of the world around us is all the more puzzling when you look at the universe on the largest scales. You find structured clumpiness for a while. Stars form galaxies, galaxies form galaxy clusters, and those clusters form superclusters and filaments and walls around great cosmic voids nearly empty of matter.
But when you zoom out even further, looking at chunks of the universe more than 300 million light-years wide, all that structure fades away. Past this point, the light from all the stars in the cosmos merges into an indistinct blur, and the universe does indeed look quite boringly similar in all directions, with no features or differences of note anywhere. Cosmologists call this the “end of greatness.”
This tedious cosmic landscape exists because the universe really was boring once. Shortly after the Big Bang, and for hundreds of thousands of years after that, it was relentlessly dull. All that existed was a thick red-hot haze of particles, stretching for trillions upon trillions of kilometers and filling every point in the universe almost evenly, with minuscule differences in the density of matter between one spot and another.
But as the universe expanded and cooled, gravity amplified those tiny differences. Slowly, over the following millions and billions of years, the places in the universe with slightly more stuff attracted even more stuff. And that’s where we came from—the profusion of things in the universe today eventually arose as more and more material accumulated, making those slightly over-dense regions into radically complicated places packed with enough matter to form stars, galaxies, and us. On the very largest scales, boredom still reigns, as it has since the beginning of time. But down here in the dirt, there’s ample variety.
This story still has some holes. For one thing, it is not clear where the matter came from in the first place. Particle physics demands that anything that creates matter must also create an equal amount of antimatter, carefully conserving the balance between the two. Every kind of matter particle has an antimatter twin that behaves like matter in nearly every way. But when a matter particle comes into contact with its antimatter counterpart, they annihilate each other, disappearing and leaving behind nothing but radiation.
That’s exactly what happened right after the Big Bang. Matter and antimatter annihilated, leaving our universe aglow with radiation—and a small amount of leftover matter, which had slightly exceeded the amount of antimatter at the start. This tiny mismatch made the difference between the universe we have today and an eternity of tedium, and we don’t know why it happened. “Somehow there was this little imbalance and it turned into everything—namely, us. I really care about us,” says Lindley Winslow, an experimental particle physicist at MIT. “We have a lot of questions about the universe and how it evolved. But this is a pretty basic kindergarten sort of question of, okay, why are we here?” ”
The fact that so many things had to “go right” for this world to exist in the first place, not to mention how certain underlying mechanisms of statistical mechanics manifest on so many levels of reality, including human social behavior, is a profound mystery to artists and physicists alike.
“In the first episode of this six-part Working Scientist podcast series, Julie Gould explores the history of science and art, and asks researchers and artists to define what the two terms mean to them.
Like science, art is a way of asking questions about the world, says Jessica Bradford, head of collections and principal curator at the Science Museum in London. But unlike art, science is about interrogating the world in a way that is hopefully repeatable, adds UK-based artist Luke Jerram, who creates sculptures, installations and live artworks around the world.
Ljiljana Fruk, a bionanotechnology researcher at the University of Cambridge, UK, says artists can be more playful and work faster, whereas scientists need to repeatedly back up their work by data, a more time-consuming exercise. They are joined by Arthur I. Miller, a physicist who launched the UK’s first undergraduate degree in history and philosophy of science in 1993, and Nadav Drukker, a ceramic artist and theoretical physicist at King’s College London.”
This can seem very counterintuitive when one looks over the body of knowledge modern astronomy has accumulated. After all, even among these great star clusters we call galaxies, the universe seems like a very chaotic place by and large.
This is what makes the rare “pristine” location such a unique find.
The exoplanet database has grown vast since the first extrasolar planets were discovered in the 1990s. Nowadays, brand-new interns are finding new worlds.
“About three days into my internship, I saw a signal from a system called TOI 1338b. At first, I thought it was a stellar eclipse, but the timing was wrong. It turned out to be a planet.”
-17-year-old Wolf Cukier, a NASA intern from New York
However, astronomical surveys have revealed many more stellar realms undergoing cosmic cataclysms than “pristine” systems. The vast majority of stars in the local universe are red dwarfs with small, close-in, tidally locked planets. Of those stars, the majority are also unstable.
Many cosmological mysteries await explorers, such as the true shape of the Milky Way galaxy. For decades, scientists assumed that spiral galaxies were flat, symmetrical discs. However, recent observations indicate that our galaxy is warped, both horizontally and vertically.
https://www.physics-astronomy.com/our-galaxy-is-warped-and-scientists-have-no-idea-why/
In fact, much like how Sol, our sun, is part of a cosmic minority (yellow main sequence stars), it seems that the Milky Way's spiral shape is actually a rare galactic configuration.
The first direct observation of gravitational waves a few years ago was made possible by a pair of giant black holes colliding. As far away from our neck of the cosmic woods as it was, this event allowed astronomers to watch the very fabric of spacetime being distorted by a “ringing” series of gravitational waves that ripple outward for billions of light years, washing over countless planetary systems as they gradually fade.
https://www.space.com/black-hole-collisions-spacetime-ring-non-linear-effects
Overall, though, a new picture of the cosmos is emerging… one in which dark matter plays the most visible roles at the very largest of scales.
“A new map derived with the help of artificial intelligence reveals previously unknown “bridges” linking galaxies in the local universe. The bridges are in the form of filamentary structures. The scientists hope their map, published along with their paper in the Astrophysical Journal, can provide fresh insights into dark matter and the history of our universe.
While dark matter is an accepted notion, thought to make up 80 percent of all the matter in the universe, it has been hard to find. Scientists have, however, inferred much about the existence and behavior of dark matter by observing its gravitational influence on other space objects.
Cosmologists believe that dark matter serves as the filamentary skeleton of the cosmic web, which in turn, makes up the large-scale structure of the universe that partially controls the motion of galaxies and other cosmic systems.”
We return once more to ScienceStyled, this time to hear more on the mystery of dark matter from Atlas himself. The writer explains more eloquently than I can why such a distant, conceptual thing as dark matter is so important to cosmologists. They understand far more about what dark matter does than what it is.
“Greetings, ye valiant seekers of wisdom, those who dare tread upon the winding trails of knowledge, where even a titan such as I findth oneself in a quagmire of perplexity! For, lo and behold, whilst I stand, an eternal sentinel, shouldering the hefty dome of the heavens – a task, mind you, not for the faint-hearted nor for those who dread a bit of upper body exercise – I have stumbled, quite inadvertently, upon a conundrum that makes the burden of the cosmos feel like a feather from the wing of Icarus. Yes, my dear mortals, I speak of none other than the shadowy, the elusive, the ever-so-baffling dark matter, and its most peculiar twirl with the universe’s even distribution.
Marvel, as I did, at the revelations of those stargazing sorcerers from the University of Toronto, who have, with their telescopes and parchment scribblings, uncovered a new link most mysterious between dark matter and our universe’s clumpiness. How laughably ironic it seems that I, bearer of the firmament, find myself a novice pupil in the school of these cosmic complexities!
Now, venture forth with me, as I attempt, in my own cumbersome way, to unravel this Gordian knot, spun by none other than the Fates themselves. Dark matter, an unseen force, as impalpable as the whispers of the gods, yet as potent as the thunderbolt of Zeus, intertwines with the fabric of our universe in ways most bizarre. Unlike the luminous orbs and fiery chariots that adorn my everlasting burden – the stars and galaxies, that is – dark matter does not frolic in the light. Nay, it lurks in the shadows, unseen, unknown, yet ever present.
Imagine a great feast, akin to those in the halls of Olympus. The delectable dishes and libations – these are the stars, planets, and nebulae, visible and opulent in their splendor. But lo! What of the unseen hands that prepare the feast, the toils of the invisible kitchen? Such is the role of dark matter: unseen, yet essential, shaping the very structure of our universe, as a master sculptor shapes the marble.
Recent studies have illuminated – oh, what a droll term to use for such a shadowy subject – that this mysterious dark matter is not strewn haphazardly across the cosmos. Nay, it is spread in a manner most uneven, clumping together like the proud Olympians in their cliques, shunning the empty vastness like the void of chaos. This clumpiness, my inquisitive companions, is what confounds us: why does dark matter, so aloof and reclusive, choose to congregate thus? Why not spread evenly like the golden rays of Helios?
Perchance, these congregations of dark matter tug at the fabric of the universe itself, orchestrating a cosmic tug-of-war. They influence the way galaxies spin and swirl, dictating the show of the heavens with a conductor’s baton made of unseen matter.
How wondrously maddening!
Yet, fear not, for the sages and scholars – modern-day Oracles of Delphi – are ever probing, ever questioning. They use their astrolabes and algorithms to pierce the veil of ignorance, bringing to light the secrets of these cosmic gatherings. Indeed, through their laborious calculations and observations, they surmise that the clumpiness of dark matter could unlock mysteries as profound as the creation of the universe itself, and perhaps even the ultimate fate that awaits it.
So, as I continue my thankless, unending task, holding aloft the starry firmament, take a moment to ponder the invisible forces at play in the great void. Just as Hercules once shouldered the heavens in my stead, so too must we endeavor to shoulder the weight of understanding, to bear the knowledge of dark matter and its mysterious ways. For in understanding, we find the key to the universe – a key as mighty as the one that unlocks the gates of Olympus!”
“What, you might ask, does this have to do with the clumpiness of our universe, a matter upon which my gaze eternally falls? Indeed, it is a question most profound! Recent studies suggest these axions could be a component of dark matter, that unseen substance which, like the threads of fate, weaves through the cosmos, influencing the structure and motion of galaxies. Should these axions be detected, their phantasmal touch might well explain the lumpy, clustered nature of galaxies, pulling them into congregations as if by some ethereal melody unheard by mortal ears.
In this pursuit, the role of axions in our universe’s story is like discovering a new character in the epic poem of creation, one whose silent footsteps have hitherto left but the faintest imprint upon the sands of time. Their presence, or lack thereof, may unravel or reweave the narrative of dark matter and our understanding of the cosmic structure.”
That was a very poetic description. This one is more technical, in the language modern astronomers actually use.
“When investigating the universe, astronomers sometimes work with what's known as the S8 parameter. This parameter basically characterizes how "lumpy," or strongly clustered, all the matter in our universe is, and can be measured precisely with what are known as low-redshift observations. Astronomers use redshift to measure how far an object is from Earth, and low-redshift studies like "weak gravitational lensing surveys" can illuminate processes unfolding in the distant, and therefore older, universe.
But S8's value can also be predicted using the standard model of cosmology; scientists can essentially tune the model to match known properties of the cosmic microwave background (CMB), which is the radiation leftover from the Big Bang, and calculate the lumpiness of matter from there.
So, here's the thing.
Those CMB experiments find a higher S8 value than the weak gravitational lensing surveys. And cosmologists don't know why — they call this discrepancy the S8 tension.
In fact, S8 tension is a brewing crisis in cosmology slightly different from its famous cousin: Hubble tension, which refers to the inconsistencies scientists face in pinning down the rate of expansion of the universe.
The reason it's a big deal that the team's new simulation doesn't offer an answer to S8 tension is, unlike previous simulations that only considered the effects of dark matter on an evolving universe, the latest work takes into account the effects of ordinary matter too. In contrast to dark matter, ordinary matter is governed by gravity as well as pressure from gas across the universe. For example, galactic winds driven by supernova explosions and actively accreting supermassive black holes are crucial processes that redistribute ordinary matter by blowing its particles out into intergalactic space.
However, even the new work's consideration of ordinary matter as well as some of the most extreme galactic winds was not sufficient to explain the weak clumping of matter observed in the present-day universe.
"Here I am at a loss," Schaye told Space.com. "An exciting possibility is that the tension is pointing to shortcomings in the standard model of cosmology, or even the standard model of physics."
The mystery is profound enough to make some scientists posit a second big bang.
https://futurism.com/second-dark-big-bang
There's little reason to doubt that the recently discovered Amaterasu particle, so named by Associate Professor Toshihiro Fujii of the Graduate School of Science and Nambu Yoichiro Institute of Theoretical and Experimental Physics at Osaka Metropolitan University in Japan, comes from that realm of undiscovered “ghostly” particles and faint footprints upon the Sands of Time. Its discovery was accidental and serendipitous, and most intriguingly, it seems to have truly come out of the void: no astrophysical objects capable of producing such an energetic particle have been identified in the part of the sky from which it came.
Human beings remain as fascinated by the sky as ever. Such a cosmic ray would probably have come and gone unobserved in ancient times, though not because no one was being watchful. On the contrary, evidence shows that humans have been paying close attention to the sky for at least eight thousand years, and spent most of that time under the assumption that this wobbling little world was the center of everything.
Big Think's Ethan Siegel explores the question of the physical center of the universe, using the metaphor of leavened dough rising (with raisins representing galaxies) rather than the traditional picture of the big bang as the cosmic explosion it sounds like. Even as the big bang gets reframed as a major cosmic event rather than the absolute beginning of everything, the leavening metaphor still works to explain the ongoing process of cosmic expansion.
“So how do we know how big this “ball of dough” is, where we are located within it, and where its center is?
This would only be an answerable question if we could see beyond the edge of the “dough,” which we cannot. In fact, to the extreme limits of the part of the Universe that we can observe, the Universe is still perfectly uniform to within that same 1-part-in-30,000, everywhere. Our Big Bang, which occurred 13.8 billion years ago, means that we can see out to a maximum of about ~46 billion light-years in all directions, and even at that distant limit, it’s still remarkably uniform. This places no constraints on:
how large the “ball of dough” that represents our Universe can be,
how large the unobservable Universe beyond our visibility limit is,
what the topology and connectedness of the unobservable Universe is,
and what the allowable “shapes” for the limits of our Universe are,
where the last includes the sub-questions of whether our Universe even has a center (or not), whether it’s finite (or not), and what our location is with respect to any larger structure the Universe may have. All we can conclude is that the Universe appears perfectly consistent with general relativity, and that, just like any individual raisin within the dough that couldn’t see beyond the edge of the dough itself, any observer could lay equal claim to the obvious (but incorrect) conclusion you’d draw if you saw everything moving away from you, “I’m at, or very near to, the actual, exact center.”
Only, it’s not correct to say we’re at “the center” at all. The only thing that’s privileged about our location in space is that the objects we see nearby are the oldest, most evolved objects we can see today, with the more distant objects being younger. The expansion rate nearby is lower, at present, than the expansion rate we see at greater distances. And the light from the closest objects is less redshifted, and their shifts are less dominated by the cosmological component of redshift, than the more distant objects.
That’s because the objects that exist all throughout the Universe can send no signals that travel faster than light, and that the light we’re observing from them, today, corresponds to the light that’s arriving right now, but must have been emitted some time ago. When we look back through space, we’re also looking back through time, seeing objects as they were in the past, when they were younger and closer (in time) to the Big Bang, when the Universe was hotter, denser, and expanding more rapidly, and, in order for that light to arrive at our eyes, it had to get stretched to longer wavelengths over the entirety of its journey.
There is, however, one thing we can look at if we wanted to know where, from our perspective, all directions truly appeared as perfectly uniform as possible: the cosmic microwave background, which itself is the leftover radiation from the Big Bang.
At all locations in space, we see a uniform bath of radiation at precisely 2.7255 K. There are variations in that temperature depending on which direction we look on the order of a few tens to perhaps a few hundred microkelvin: corresponding to those 1-part-in-30,000 imperfections. But we also see that one direction looks a little bit hotter than the opposite direction: what we observe as a dipole in the cosmic microwave background radiation.
What could cause this dipole, which is actually quite large: about ±3.4 millikelvin, or about 1-part-in-800?
The simplest explanation is, going all the way back to the beginning of our discussion, our actual motion through the Universe. There actually is a rest frame to the Universe, if you’re willing to consider, “At this location, I must be moving at this particular speed so that the background of radiation I see is actually uniform.” We’re close to the right speed for our location, but we’re a little bit off: this dipole anisotropy corresponds to a speed, or a peculiar velocity, of about 368 ± 2 km/s. If we either “boosted” ourselves by that precise speed, or kept our current motion but moved our position to be about 17 million light-years away, we’d actually appear to be at a point that was indistinguishable from a naive definition of the Universe’s center: at rest with respect to the overall, observed cosmological expansion.
That’s remarkably close by! After all, we can see for some ~46.1 billion light-years in all directions, and 17 million light years is only 0.037% of the radius-of-the-Universe away from us. But the more sober truth is not that we’re near the center, but that any observer in any galaxy would conclude that they were at (or very near) the center as well. No matter where in the Universe you’re located, you’ll find yourself existing at this particular moment in time: a certain, finite amount of time after the Big Bang. Everything that you see appears as it was when the light from it was emitted, with the arriving light being shifted by both the relative motions of what you’re observing with respect to you and also the expansion of the Universe.
Depending on where you lived, you might see a dipole in your cosmic microwave background corresponding to a motion of hundreds or even thousands of km/s in a particular direction, but once you accounted for that piece of the puzzle, you’d have a Universe that looked just like it does from our perspective: uniform, on the largest scales, in all directions.
The Universe is centered on us in the sense that the amount of time that’s passed since the Big Bang, and the distances that we can observe out to, are finite. The part of the Universe we can access is likely only a small component of what actually exists out there. The Universe could be large, it could loop back on itself, or it could be infinite; we do not know. What we are certain of is that the Universe is expanding, the radiation traveling through it is getting stretched to longer wavelengths, it’s getting less dense, and that more distant objects appear as they were in the past. It’s a profound question to ask where the center of the Universe is, but the actual answer — that there is no center — is perhaps the most profound conclusion of all.”
https://bigthink.com/starts-with-a-bang/true-center-universe/
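As a quick sanity check on those numbers, the dipole speed follows from the first-order Doppler relation ΔT/T ≈ v/c (the rounded ±3.4 mK figure is the one quoted above; the published 368 ± 2 km/s comes from the more precise measured amplitude):

```python
c = 299_792.458      # speed of light, km/s
T_cmb = 2.7255       # mean CMB temperature, K
dipole = 3.4e-3      # dipole amplitude quoted above, K

v = c * dipole / T_cmb               # first-order Doppler: delta_T / T ~ v / c
print(round(v))                      # ~374 km/s, in line with the published 368 +/- 2 km/s

# And the "remarkably close by" claim: 17 million light-years vs. a 46.1-billion-light-year radius
print(round(17e6 / 46.1e9 * 100, 3)) # ~0.037 (percent), matching the figure in the article
```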
As vast and dynamic as the three-dimensional, “classical” universe is, and for all the volumes it speaks (in my mind, at least) of the pan-dimensional intelligence that brought it about, all that we see in the sky is the equivalent of a lit stage with props and actors. Behind the curtain lies the quantum universe, with all the ropes, scaffolding, and light sources that make the production possible. Our own world is, of course, part of that stage, and just as connected to “Quantumania” as everything around us.
https://youtu.be/5WfTEZJnv_8?si=X1uLUAA51S2ssI8k
Yet another playful lecture from ScienceStyled has Alice from Wonderland breaking the quantum realm down into “mad” metaphors.
“In the quantum realm, a tea party unlike any other takes place – a mad gathering where particles like electrons and photons exchange not only gossip but their very hats and properties! Now, I must admit, at first this sounded as confounding to me as trying to play croquet with a flamingo. However, in the peculiar world of quantum mechanics, these strange occurrences are not only common but essential in understanding the nature of things.
Imagine, if you will, electrons and photons as guests at a tea party. But this is no ordinary gathering. Here, the guests are indecisive, constantly swapping hats, each hat symbolizing a different property like speed, position, or spin. These tiny particles, unlike the guests at my own tea parties, can choose to be in multiple states at once – a concept known as superposition. Picture a teacup simultaneously on the table and spinning in the air, deciding its position only when you try to take a sip!”
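Stripped of the tea-party imagery, superposition is just a matter of complex amplitudes whose squared magnitudes give measurement probabilities. A minimal sketch (the particular state below is arbitrary):

```python
import numpy as np

# One "guest" wearing two hats at once: an equal superposition of |0> and |1>,
# written as a pair of complex amplitudes. Any pair with |a|^2 + |b|^2 = 1 would do.
state = np.array([1, 1j]) / np.sqrt(2)

probabilities = np.abs(state) ** 2        # Born rule: probability = |amplitude|^2
print(probabilities)                      # [0.5 0.5] -> each outcome equally likely

# "Taking a sip" (measuring) forces a single definite outcome:
outcome = np.random.choice([0, 1], p=probabilities)
print(outcome)
```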
Recent developments in scientific cosmology (a field that now spans astronomy, particle science, and quantum physics) have shaken various branches of science, displacing ideas previously taken as axiomatic truths (the big bang chief among them). At the same time, advances have been made in understanding both the microscopic and subatomic worlds, yet biologists continue to theorize about what sort of natural mechanism could have brought complex eukaryotic cells with mitochondria into being on this planet in the first place.
From that descends the enigma of inherited memories and the idea of biological reincarnation. The idea comes mostly from the work of Dr Jim B. Tucker, an associate psychiatry professor at the UVA Medical Center’s Division of Perceptual Studies, and that of his mentor and predecessor in this field, a certain (deceased) Dr Ian Stevenson.
“Raised as a Southern Baptist in North Carolina, Tucker has weighed other, more earthly, explanations to the phenomenon.
He’s looked at fraud, perhaps for financial gain or fame. But most claims usually don’t net a movie deal, and many of the families Tucker’s met, particularly in the West, are reluctant to speak publicly about their child’s unusual behavior. Tucker has also considered simple childhood fantasy play, but that doesn’t explain how the details children offer can sometimes lead back to a particular individual. “It defies logic that it would just be a coincidence,” he says.
Faulty memories of witnesses are likely present in many cases, Tucker says, but there are dozens of instances where people made notes of what the children were saying almost from the beginning.
“None of those possibilities would also explain some of the other patterns, like the intense emotional attachment many children have to these memories, as Ryan exhibited,” Tucker says.
Tucker believes the relatively small number of claims he and Stevenson collected during the last five decades, especially from America, is partly because parents may dismiss or misunderstand what their children are telling them. “If children get a message that they aren’t being listened to, they will stop talking,” Tucker says. “They see they aren’t supported. Most kids aim to please their parents.”
How exactly the consciousness, or at least memories, of one person might transfer to another is obviously a mystery, but Tucker believes the answers might be found within the foundations of quantum physics.
Scientists have long known that matter like electrons and protons produces events only when observed.
A simplified example: Take light and shine it through a screen with two slits cut in it. Behind the screen, put a photographic plate that records the light. When the light is unobserved as it travels, the plate shows it went through both slits. But what happens when the light is observed? The plate shows the particles go through just one of the slits. The light’s behavior changes, and the only difference is that it is being observed. There’s plenty of debate on what that might mean. But Tucker, like Max Planck, the father of quantum physics, believes that discovery shows that the physical world is affected by, and even derived from the non-physical, from consciousness.
If that’s true, then consciousness doesn’t require a three-pound brain to exist, Tucker says, and so there’s no reason to think that consciousness would end with it.
“It’s conceivable that in some way consciousness could be expressed in a new life,” Tucker says.
Robert Pollock, director of the Center for the Study of Science and Religion at Columbia University, said scientists have long pondered the role observation might play in the physical world, but the hypotheses about it are not necessarily scientific. “Debates among physicists that center on the clarity and beauty of an idea but not on its disprovability are common to my mind, but are not scientific debates at all,” says Pollock. “I think what Planck and others since who have looked at how these very small particles behave, and then made inferences about consciousness, are expressing a hope. That’s fine; I hope they are right. But there’s no way to disprove the idea.” ”
https://uvamagazine.org/articles/the_science_of_reincarnation
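For what it's worth, the two-slit behavior described in that passage can be sketched numerically: adding the amplitudes from the two slits before squaring produces interference fringes, while adding the probabilities (as happens when which-path information is recorded) does not. All parameter values below are arbitrary illustrative choices:

```python
import numpy as np

wavelength = 500e-9                  # 500 nm light (illustrative)
k = 2 * np.pi / wavelength
d = 10e-6                            # slit separation, m
L = 1.0                              # distance from slits to screen, m
x = np.linspace(-0.05, 0.05, 9)      # positions on the screen, m

# Path lengths from each slit to each point on the screen
r1 = np.sqrt(L**2 + (x - d / 2) ** 2)
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)

# No which-path information: amplitudes add, then square -> interference fringes
coherent = np.abs(np.exp(1j * k * r1) + np.exp(1j * k * r2)) ** 2

# Which-path information recorded: probabilities add -> no fringes
which_path = np.abs(np.exp(1j * k * r1)) ** 2 + np.abs(np.exp(1j * k * r2)) ** 2

print(coherent.round(2))      # values swing between ~0 and ~4 across the screen
print(which_path.round(2))    # flat ~2 everywhere
```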
Various concepts from Hinduism recur in various branches of modern science, especially (but by no means limited to) the social sciences. They arrive with their original meaning intact through teachers of the New Age, while scientists and psychologists tend to apply them symbolically. This is especially apparent in Jungian psychology. Jung is quite popular among New Agers nowadays, though he would probably have written it all off as nonsense.
He was an atheist and didn't believe in the validity of these symbols beyond their use as psychological metaphors, but he's honored as a guru in India along with Sigmund Freud.
However, as Dr Robert Pollock said in regards to Tucker's work, all such theories are unfalsifiable, and will remain so until consciousness itself is truly understood. Of course, with the continual advances in technology, we're probably not far off from that at all. On that note, Alatheia Today's David Cowles will provide one of our last quotes.
“Traditionally, the ‘problem of consciousness’ has been focused exclusively on Homo sapiens, but recently, that’s had to change. Strong evidence has emerged suggesting that members of many other species could pass a modified Turing Test. And don’t get me started on AI!
Bonobos and chimpanzees are ‘no-brainers’, pardon the pun. Marine animals as well (e.g., dolphins). Corvids (crows and ravens), Parrots, and Cephalopoda (e.g., octopus), very likely. Recent exploration has detected intriguing signs of self-awareness in multiple species of fish, insects, and plants. Now, Nature (11/1/23) has published a study suggesting that the presumably ‘headless starfish’ is actually ‘bodiless’; Max Headroom!
Everywhere we look in the biosphere, we seem to find evidence of mental functioning, self-awareness, consciousness, or at least proto-consciousness. Recent efforts to reduce consciousness to a ‘neural network’ have failed, both scientifically and philosophically, and recent discoveries make that hypothesis less and less defensible.
Today, mechanism is on the run! The imputed connection between physiology and consciousness is growing ever thinner. Vastly different versions of ‘sensory processing apparatus’ seem to support very similar mental phenomena.
For centuries, science has focused on removing all traces of ‘spirituality’ from biology. No more ‘soul’, no more élan vital. Consciousness was to be mechanism’s final frontier. The locker room was decorated, caterers on site, champagne on ice, but, as it turns out, there’s nothing to celebrate.
Just as people were confidently predicting the final coup, things began ‘slip sliding away’. It’s becoming clear that ‘something’s happening here, and we don’t know what it is’ (Bob Dylan).”
Ironically, for all that has been done and all that has been learned, a 25-year science wager on this very subject recently came to an end. In 1998, neuroscientist Christof Koch bet philosopher David Chalmers that the mechanism by which the brain’s neurons produce consciousness would be discovered by 2023. The two agreed publicly on 23 June, at the annual meeting of the Association for the Scientific Study of Consciousness (ASSC) in New York City, that it is an ongoing quest — and declared Chalmers the winner.
What ultimately helped to settle the bet was a study testing two leading hypotheses about the neural basis of consciousness, whose findings were unveiled at the conference.
“It was always a relatively good bet for me and a bold bet for Christof,” says Chalmers, who is now co-director of the Center for Mind, Brain and Consciousness at New York University. But he also says this isn’t the end of the story, and that an answer will come eventually: “There’s been a lot of progress in the field.”
Consciousness is everything that a person experiences — what they taste, hear, feel and more. It is what gives meaning and value to our lives, Chalmers says.
Despite a vast effort, researchers still don’t understand how our brains produce it, however. “It started off as a very big philosophical mystery,” Chalmers adds. “But over the years, it’s gradually been transmuting into, if not a ‘scientific’ mystery, at least one that we can get a partial grip on scientifically.”
Koch, who holds the title of meritorious investigator at the Allen Institute for Brain Science in Seattle, Washington, began his search for the neural footprints of consciousness in the 1980s. Since then, he has been invested in identifying “the bits and pieces of the brain that are really essential — really necessary to ultimately generate a feeling of seeing or hearing or wanting”, as he puts it.
At the time Koch proposed the bet, certain technological advancements made him optimistic about solving the mystery sooner rather than later. Functional magnetic resonance imaging (fMRI), which measures small changes in blood flow that occur with brain activity, was taking laboratories by storm. And optogenetics — which allowed scientists to stimulate specific sets of neurons in the brains of animals such as nonhuman primates — had come on the scene. Koch was a young assistant professor at the California Institute of Technology in Pasadena at the time. “I was very taken by all these techniques,” he says. “I thought: 25 years from now? No problem.” ”
This publication from Nature's Mariana Lenharo goes on to explain the sort of projects both men have been involved with over the interim.
“Around that time, both researchers had become involved in a large project supported by the Templeton World Charity Foundation, based in Nassau, the Bahamas, that aimed to accelerate research on consciousness.
The goal was to set up a series of ‘adversarial’ experiments to test various hypotheses of consciousness by getting rival researchers to collaborate on the studies’ design. “If their predictions didn’t come true, this would be a serious challenge for their theories,” Chalmers says.
“The findings from one of the experiments — which involved several researchers, including Koch and Chalmers — were revealed on Friday at the ASSC meeting. It tested two of the leading hypotheses: integrated information theory (IIT) and global neuronal workspace theory (GNWT). IIT proposes that consciousness is a ‘structure’ in the brain formed by a specific type of neuronal connectivity that is active for as long as a certain experience, such as looking at an image, is occurring. This structure is thought to be found in the posterior cortex, at the back of the brain. GNWT, by contrast, suggests that consciousness arises when information is broadcast to areas of the brain through an interconnected network. The transmission, according to the theory, happens at the beginning and end of an experience and involves the prefrontal cortex, at the front of the brain.
Six independent laboratories conducted the adversarial experiment, following a preregistered protocol and using various complementary methods to measure brain activity. The results — which haven’t yet been peer reviewed — didn’t perfectly match either of the theories.
“This tells us that both theories need to be revised,” says Lucia Melloni, a neuroscientist at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany, and one of the researchers involved. But “the extent of that revision is slightly different for each theory”.”
Well, in an essay that has been nearly the length of an age in itself, I've outlined many of the problems and enigmas that lie at the edges of science. The “psychosocial” phenomenon of UFOs is another rabbit hole I've already explored extensively elsewhere. I still find it striking that, for all the variety in the shapes and sizes of the objects and (in cases of close encounters of the third kind) their occupants, basically every “abductee” who claims to have been communicated with during the experience comes back with the same message: all is one, and the self is a primitive illusion we humans just haven't grown out of yet.
This is far from the only place in the world where such a message is found; it comes from far more terrestrial and “reputable” sources than the Greys. But it turns up high and low in the “ufology” community.
https://m.youtube.com/watch?v=iBvbbicuSJo
It's interesting how the tension between this Hindu-esque “message from the cosmos” and traditional Cartesian philosophy basically maps onto the divide between the two sides of the modern global political spectrum: conservatism and progressivism. The Root Cause's David G, featured in the group post, cuts to the heart of the matter here.
“The word “progressive”, a noun, is derived from the root word “progress”. As per the Merriam-Webster dictionary, the word ‘progress’ encapsulates not merely forward movement but an onward march towards a destination. Progress embodies an advancement towards a better, more complete, or more modern state. Progress is also a verb that beckons us to move forward, onward, and upward in both space and time.
The word “conservative”, a noun, derives from the root word “conserve” or “to conserve”, a verb that means to preserve things as they are, keeping things unchanged, even in a pristine state. To conserve is to preserve, protect, and maintain things as they are and as they have been.
Amidst the American cacophony, a recent ABC News/Ipsos poll reveals a resounding sentiment—one of disquiet. A staggering 76% opine that the nation veers off-course, a disconcerting chorus. Only a meager 23% perceive a trajectory aligning with their aspirations.
So, dear America, I pass the rhetorical baton to you: Do you crave change, the anthem of progress? Or is it the embrace of conservation, a desire to retain the familiar? The pendulum of choice is swinging now with a greater urgency and amplitude than ever.
The future awaits your verdict.”
In a world so deeply divided that the chaos almost seems synchronized, is it any wonder that Mark Zuckerberg is building a supervillain-level fortress with a luxurious doomsday bunker beneath? Is the notion of predictive programming so outlandish?
Another related question, one I HAVEN'T already asked dozens of times in this vast essay: will Christianity survive the next century?
Dr Eitan Bar (a messianic rabbi) gives five primary reasons why the world's largest religion is currently seeing its numbers decline, a great “falling away” from the faith (2 Thessalonians 2:3). He attributes the decline to people “deconstructing” their faith, something they are prompted to do in more ways than ever in these times, and he frames each reason as a matter of context:
- theological context (growing up in ultra-strict, myopic denominations or otherwise receiving a warped understanding of God);
- cultural context (critical thinking versus belief in God);
- emotional context (being on the receiving end of abuse from someone who claims to represent God);
- the context of God's character (God's actions in the Old Testament);
- and traumatic life events.
Some of these factors are timeless aspects of the human experience; others are distinct to the modern age. Dr Bar then goes on to name yet another reason, quite separate from the first five.
“From a Christian theological perspective, while some argue that our shortcoming is the primary force separating us from God, I contend that Satan plays a more crucial role. Satan strategically exploits sociological, psychological, and emotional vulnerabilities to sow division among God’s creatures, increase doubt among people of faith, disseminate false doctrines, and exacerbate human suffering. In essence, we find ourselves as the metaphorical chess pieces in a cosmic duel between God and Satan.
The evidence of Satan’s effectiveness is especially apparent in the deep divisions he fosters, not just between religious and secular communities but also among people of faith. Throughout history, for example, Christians have persecuted Jews, and even within Christianity, divisions run deep—evangelicals often harbor disdain for non-evangelicals, and vice versa. This encourages many Christians and seekers alike to run away from religion. Again, it’s hard to blame them.”
I don't know if it's a good or bad thing that Christianity is dwindling. I know plenty of people who fall under both major schools of thought on the matter. Regardless, the Twenty-first Century is when everything changes… or so they say.
We're definitely ending this on a rather gloomy note, so I'll provide one last quote to brighten things up.
“I’ve realized why pessimism sounds smart: optimism often requires believing in unknown, unspecified future breakthroughs – which seems fanciful and naive. If you very soberly, wisely, prudently stick to the known and the proven, you will necessarily be pessimistic.
No proven resources or technologies can sustain economic growth. The status quo will plateau. To expect growth is to believe in future technologies. To expect very long-term growth is to believe in science fiction.”
Since this will be published on Christmas Day 2023, let me congratulate you if you've managed to read this far. I intend to leave this essay as more of an archive for people to stumble upon at random; I don't expect anyone I know to take the time to read it all.
Regardless, these bloviating speculations may provide entertainment and even a little worthwhile knowledge for someone, eventually. I suppose it depends on how 2024 plays out. Happy New Year, and see you on the other side.


