One of the largest challenges a human mission to deep space would face is the inevitable communications delay caused by the vast distance between Mission Control on Earth and the spacecraft carrying the crew. During the Apollo missions, in which American astronauts visited the Moon, that delay was only around 1.3 seconds each way, meaning you'd need to wait about 2.6 seconds (1.3 for your message to get there, and 1.3 for the response to come back) to get an answer to your question.
Insignificant as that delay may seem, it grows to more than 20 minutes each way for the crew of a trip to Mars (over 40 minutes for a simple question and answer), meaning the astronauts would not be able to rely on the collective expertise and computing power offered by our ground-based space infrastructure. Emergency course corrections, in three dimensions and with limited fuel, would require completing complex calculations instantly, with little to no margin for error. While there are rumors of early Gemini astronauts doing just that at least once, a mission to Mars, or farther, would require making these decisions near-instantly, something no human being may be capable of doing.
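Both figures follow directly from light travel time: the distance divided by the speed of light. A quick back-of-the-envelope sketch in Python, using an average Earth-Moon distance and Earth and Mars near their farthest separation (the real values shift constantly with orbital positions), gives roughly the numbers above:

# One-way signal delay is just distance / speed of light.
# Distances are approximate; actual values vary with orbital positions.
C = 299_792_458          # speed of light, m/s
MOON_AVG_M = 3.844e8     # average Earth-Moon distance, m
MARS_MAX_M = 4.01e11     # Earth-Mars distance near their farthest separation, m

for name, dist in [("Moon (average)", MOON_AVG_M), ("Mars (farthest)", MARS_MAX_M)]:
    one_way = dist / C
    print(f"{name}: {one_way:.1f} s one way, {2 * one_way:.1f} s round trip")

For the Moon that works out to the familiar 1.3 seconds each way; for Mars at its most distant, roughly 22 minutes each way, or about 45 minutes round trip.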
And that’s where supercomputers come in. NASA believes part of the solution to the problems caused by the communications delay is simply to equip deep-space missions with enough computing power that they don’t have to rely on ground-based equipment. A powerful enough supercomputer, presumably coupled with some level of artificial intelligence, could feasibly make the calculations and take action fast enough to save the crew before the first astronaut was even fully aware of the danger.
“This goes along with the space station’s mission to facilitate exploration beyond low Earth orbit,” Mark Fernandez, HPE’s leading payload engineer for the project, said. “If this experiment works, it opens up a universe of possibility for high performance computing in space.”
It’s with this technological progression in mind that SpaceX is about to deliver a one-teraflop “supercomputer” to the International Space Station. Here on Earth, it’s not uncommon to find high-end laptops with a teraflop of computing power, but the mission, slated for Monday, will mark the delivery of the most powerful computer we’ve ever sent into orbit, as the ISS has been in use for nearly twenty years and has sometimes been slow to upgrade.
Unlike most computers sent into space, this Hewlett Packard Enterprise (HPE) “supercomputer” has not been outfitted with radiation shielding intended to protect it from the kinds of radiation the ISS regularly experiences throughout its orbit. Instead, the team hopes to observe how the machine holds up by conducting regular benchmark performance tests to see how the radiation affects its ability to function.
Meanwhile, an identical control computer will go through the same benchmark testing at the same intervals back here on Earth, allowing researchers to see exactly how radiation changes computing performance as compared to normal wear and tear over time. The data gathered from this test can give scientists and engineers a better understanding of how radiation may affect spaceborne computers on missions of the future, helping us find more effective methods of protection and other ways to mitigate the risk.
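The comparison itself is conceptually simple. As an illustration only (this is not HPE’s or NASA’s actual test harness; the benchmark names, scores, and threshold below are invented for the example), it amounts to running the same tests on both machines and flagging how far the orbiting unit drifts from its ground twin:

# Illustrative sketch only: compare benchmark scores from the orbiting unit
# against its ground-based twin to estimate radiation-related degradation.
# All names and numbers here are made up for the example.

def relative_slowdown(ground_score: float, orbit_score: float) -> float:
    """Fraction of performance lost in orbit relative to the ground twin (higher score = faster)."""
    return (ground_score - orbit_score) / ground_score

# Hypothetical results from one round of identical tests.
ground = {"linpack_gflops": 950.0, "stream_gbps": 110.0}
orbit  = {"linpack_gflops": 930.0, "stream_gbps": 108.5}

for name in ground:
    loss = relative_slowdown(ground[name], orbit[name])
    flag = "  <-- investigate" if loss > 0.05 else ""
    print(f"{name}: {loss:.1%} slower in orbit{flag}")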
While relying on supercomputers running advanced A.I. didn’t seem to work out for the crew of the spacecraft Discovery One in “2001: A Space Odyssey,” the approach may be our best chance at improving the survivability of long-duration space travel, especially if we ever set our sights even farther away than Mars. This first teraflop computer may not get us there, but it’s an important first step toward one day building a (hopefully friendly) HAL of our own.
Image courtesy of NASA