
Scientists develop AI that can listen to the pulse of a reef being restored

From - Mongabay

By - Cassie Freund


  • Scientists have developed a machine-learning algorithm that can distinguish healthy coral reefs from less healthy ones by the soundscape in the ecosystem.
  • Previous studies had established that the sounds of life in a successfully recovered reef are similar to those from a healthy reef, but parsing all the acoustic data was slow and labor-intensive.
  • The new algorithm has been hailed as “an important milestone” for efficiently processing acoustic data to answer the basic question of how to determine the progress of a reef restoration program.
  • Researchers say follow-up work is still needed, including to check whether the algorithm, tested in the Pacific Coral Triangle, also works in reefs in other parts of the world.

Healthy coral reefs, with their brightly colored corals and bustling schools of fish, are easy to spot underwater. New research into the marine soundscapes of reefs in the Sangkarang Archipelago in Sulawesi, Indonesia, shows they’re easy to hear too — for the right algorithm.

The study, led by University College London and Zoological Society of London doctoral student Ben Williams, was published in July in the journal Ecological Indicators.

Williams’s research builds on a 2021 paper led by his collaborator, Tim Lamont, which demonstrated that the diversity of marine life sounds on mature restored reefs (the pops, grunts, scrapes, whistles and rattles that fish and other animals make, picked up by underwater microphones called hydrophones) was similar to that heard on healthy reefs.

According to Williams, that study was a proof of concept that soundscapes can be used to track reef recovery. But manually listening to the recordings was a slow and labor-intensive process.

A coral reef in the Marsa Ghozlani dive site in Egypt’s portion of the Red Sea. Image by Renata Romeo / Ocean Image Bank.

Williams turned to machine learning to make sense of the recordings, inspired by an assignment from his undergraduate studies in which he was tasked with identifying patients with heart disease from a complex data set.

“I realized this string of numbers of recordings from a healthy and of a degraded reef was really similar to this string of numbers we had of patients with or without heart disease,” he said.

So he created an algorithm to classify the soundscape data as belonging to either healthy or degraded reefs. Williams then tested his model on a separate set of recordings from three restored reefs in the Sangkarang Archipelago (also known as the Spermonde or Pabbiring Archipelago): two sites established more than two years earlier and one restored only recently. He found that most recordings from the older restoration sites had the soundscape of a healthy reef, whereas the majority of recordings from the newly restored site were classified as degraded.
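Conceptually, this is a standard supervised-learning pipeline: summarize each recording as a vector of acoustic features, train a binary classifier on recordings from known healthy and degraded reefs, then apply it to recordings from restored sites. The Python sketch below is only an illustration of that idea, not the study’s actual code; the file paths, MFCC features and random-forest model are all assumptions for demonstration.

```python
# Illustrative sketch only: a binary soundscape classifier in the spirit
# of the study, NOT the authors' published pipeline. Paths, features and
# model choice are all hypothetical.
import glob

import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def soundscape_features(path, sr=16000):
    """Summarize a recording as a small vector (mean/std of 13 MFCCs)."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


# Hypothetical directory layout: labeled training recordings.
healthy = [soundscape_features(p) for p in glob.glob("healthy/*.wav")]
degraded = [soundscape_features(p) for p in glob.glob("degraded/*.wav")]

X = np.array(healthy + degraded)
y = np.array([1] * len(healthy) + [0] * len(degraded))  # 1 = healthy

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

# Fit on all labeled data, then classify recordings from a restored site.
model.fit(X, y)
restored = np.array([soundscape_features(p) for p in glob.glob("restored/*.wav")])
print("Fraction classified healthy:", model.predict(restored).mean())
```

In a setup like this, a mature restoration site would be expected to have most of its recordings classified as healthy, matching the pattern Williams observed.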

To Aaron Rice, a research scientist at Cornell University’s K. Lisa Yang Center for Conservation Bioacoustics, who was not involved in the study, these findings are an exciting step forward for coral reef restoration.

“Extensive resources and time are spent trying to restore different reef sites,” he said. “The fundamental question is, how do you measure progress and how do you know when it’s done?”

The ability to use acoustic data, which are relatively easy to collect, to answer that question is thus “an important milestone,” Rice said.

Plenty of non-human species have their ears tuned to the sounds of these coral reefs as well. Juvenile fish and coral larvae listen for healthy reef sounds before selecting where to make their homes. If restored sites sound like nice places to settle down, they can attract a new generation of fish, corals, and other marine animals, returning the reef to its former glory. Busy, diverse soundscapes beget busy, diverse ecosystems teeming with marine life.

A reef restoration project in the South China Sea. Image by Ansrx via Wikimedia Commons (CC BY-SA 4.0).

Williams observed this process up close while collecting the data for this project.

“The most inspiring thing I’ve ever seen on a coral reef was to go and swim on one of the sites which had been restored several years ago, and it was completely back to brand new. It looks like a really healthy thriving reef,” he said.

The only hint that it hadn’t always been that way was that if he looked closely, he could see the metal frames that had originally been seeded with coral fragments peeking through gaps in the reef. These frames, called reef stars, are a technique used by the Mars Coral Reef Restoration Program to spur coral growth in barren rubble beds.

Both Williams and Rice say that figuring out exactly what sounds distinguish healthy from degraded reefs is the logical next step to this study. Machine learning is often described as a black box because the models are complex and difficult to interpret. Coral reefs are also busy and jumbled audio environments (Rice likens them to the cacophony of New York City). The difference between a healthy and degraded coral reef might be the crackle of a snapping shrimp, the crunch of parrotfish ripping algae off the reef, or even just the way individual sounds propagate through the physical environment — it will take more time and experimentation for scientists to narrow down which are important.
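One standard way to open that black box is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. Continuing the hypothetical sketch above (again an assumption for illustration, not the study’s method), that probe might look like this:

```python
# Continuing the hypothetical sketch: permutation importance scores each
# acoustic feature by how much randomly shuffling it degrades accuracy.
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
# Rank features; the top ones are candidates for the sound statistics that
# separate healthy from degraded reefs in this toy setup.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {idx}: accuracy drop {result.importances_mean[idx]:.3f}")
```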

In the meantime, the research team Williams belongs to is sending hydrophones to other restoration sites in places like the Great Barrier Reef, Mexico and the Maldives, to learn what reef recovery sounds like in other places. The Pacific Coral Triangle, where the Sangkarang Archipelago is located, has the highest marine biodiversity in the world, so the range of sounds that signify a healthy reef there may differ from those in the Caribbean or other regions. In the team’s quest to understand the world’s coral soundscapes, Williams said he’s interested in testing whether one global model or several regional machine-learning models will be needed to separate healthy reefs from degraded ones.

Hydrophone deployment at a reef site. Williams created an algorithm to classify the soundscape data as belonging to either healthy or degraded reefs. Image by National Marine Sanctuaries via Wikimedia Commons (Public domain).

Regardless of the outcome of Williams’s follow-up research, bioacoustic data will remain an important and efficient way to understand marine ecosystems. The same research team has developed a relatively inexpensive, open-source hydrophone, the HydroMoth, for distribution to those other sites. Hydrophones in general are easy to install and can collect data far longer than researchers can stay underwater. They also record at night and in ocean conditions that might keep scuba-diving scientists away.

“The idea of listening to coral reefs is either seen as really, really niche, experimental, or too uncertain, and now it’s like — well, no,” Rice said. “These are the kinds of papers that show this is really viable and is ready for prime time.”

But, as Williams is quick to note, successful restoration alone will not save the world’s coral reefs; climate change remains an urgent threat to these vital ecosystems.

“Restoration is brilliant because it can bring back habitat that we’ve lost,” he said. “But if we don’t get a handle on climate change, on greenhouse gas emissions, it’s not going to matter because we’re still forecasted to lose almost all of the world’s reefs by the end of the century if we don’t do that.”

Banner image: A healthy coral reef in Tahiti. Image by Jayne Jenkins / Ocean Image Bank.
