Wednesday, April 2, 2025

The quantum effect allows us to research our minds and memories.



"A stunning discovery shows that quantum computation might be embedded in the very structure of life, enabling organisms to process information at mind-boggling speeds – even in warm, wet environments. Credit: SciTechDaily.com" (ScitechDaily, Scientists Just Discovered Quantum Signals Inside Life Itself)


Quantum effects in living organisms are something we might not yet fully understand. Living systems are complicated: they are full of interference, and their entropy is very high. Yet inside cells the situation may be different. There can be "deep" micro-whirls that allow quantum information to travel through the cell itself. The proteins in the cells can also form so-called quantum channels, in which quantum information can travel without interacting with the cell's structures.

This opens new ways to research a cell's internal actions and reactions. It also opens new ways to try to understand things like consciousness and its mechanisms. That quantum phenomenon can open the road to researching how things like magnetic fields transform or affect our thoughts and minds. It can also be the key to reading our memories and dreams.



"The computational capacities of aneural organisms and neurons have been drastically underestimated by considering only classical information channels such as ionic flows and action potentials, which achieve maximum computing speeds of ∼103 ops/s. However, it has been recently confirmed by fluorescence quantum yield experiments that large networks of quantum emitters in cytoskeletal polymers support superradiant states at room temperature, with maximum speeds of ∼1012 to 1013 ops/s, more than a billion times faster and within two orders of magnitude of the Margolus-Levitin limit for ultraviolet-photoexcited states. "(ScitechDaily, Scientists Just Discovered Quantum Signals Inside Life Itself)

"These protein networks of quantum emitters are found in both aneural eukaryotic organisms as well as in stable, organized bundles in neuronal axons. In this single-author research article in Science Advances, quantitative comparisons are made between the computations that can have been performed by all superradiant life in the history of our planet, and the computations that can have been performed by the entire matter-dominated universe with which such life is causally connected. Estimates made for human-made classical computers and future quantum computers with effective error correction motivate a reevaluation of the role of life, computing with quantum degrees of freedom, and artificial intelligences in the cosmos. Credit: Quantum Biology Laboratory, Philip Kurian" (ScitechDaily, Scientists Just Discovered Quantum Signals Inside Life Itself)
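To put the quoted speeds side by side, a trivial order-of-magnitude check using only the figures from the quote:

```python
# Order-of-magnitude comparison of the computing speeds quoted above.
classical_ops_per_s = 1e3       # ~10^3 ops/s for ionic flows and action potentials
superradiant_ops_per_s = 1e12   # lower end of the quoted ~10^12 to 10^13 ops/s

ratio = superradiant_ops_per_s / classical_ops_per_s
print(f"Speedup: about {ratio:.0e}x")  # 1e+09, i.e. roughly a billion times faster
```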




"Yale researchers have uncovered evidence that babies can store memories far earlier than we once thought. Credit: SciTechDaily.com" (ScitechDaily, Your Earliest Memories Might Still Exist – Science Just Found the Clues)


In some models, our first memories are behind things like nightmares. 


Researchers think that the very first memories in our brains still exist, but the brain cannot collect them into a new whole. Those first memories were stored in a brain that had only a few neurons compared with an adult brain. That means the memories are scattered around the brain. Maybe quantum technology could read the memory allocation units that those first neurons stored. Theoretically, such a system would only need to recognize those cells, read the data units from those very first memory cells, and then rearrange them into their original order.

Memory cells act like a puzzle. Every piece in the puzzle is an independent memory allocation unit. Every memory cell holds one part of a memory, and a cell group handles all of those parts together. Every neuron handles only a small part of the image, and if those neurons are far away from each other, it becomes hard to restore the image. Thinking means that the brain reconnects those memory allocation units. When a person gets flashbacks in a stressful situation, an unused neural track has been activated.
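As a loose illustration of the puzzle analogy only (the fragment keys, the `fragments` dictionary, and the reassembly step below are made up for the sketch, not a model of real neurons), scattered pieces that keep their original index can be reordered into the original whole:

```python
# Toy illustration of the "scattered puzzle pieces" analogy: each fragment
# carries the position it originally had, so the whole can be rebuilt by
# sorting on that position. Purely illustrative; not a neural model.
fragments = {
    2: "ran through",
    0: "A small dog",
    3: "the old garden.",
    1: "with muddy paws",
}

# Reassemble by the stored order, regardless of where each piece "lives".
original = " ".join(fragments[i] for i in sorted(fragments))
print(original)  # "A small dog with muddy paws ran through the old garden."
```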

There is a model in which nightmares form in those first memory cells. First memories are behind strange dreams: our brains have access to those memories, but they cannot collect them back into their original whole.

When we think about information stored in our brains, we must realize that the first memories from childhood might not be gone or lost. The problem is that our brains develop after childhood. In that process, the number of neurons grows and their connections multiply. Our first memories formed in a brain with far fewer neurons. As the number of neurons grows, those memories, or memory allocation units, end up farther apart than they were in our childhood. Our brains just cannot convert those memories into new wholes.


https://scitechdaily.com/scientists-just-discovered-quantum-signals-inside-life-itself/


https://scitechdaily.com/your-earliest-memories-might-still-exist-science-just-found-the-clues/

The new plasma thruster uses water as a propellant.


"Florida-based firm Miles Space has demonstrated a water-fueled electric thruster with very low power demands."(Interesting Engineering, Florida startup tests water thruster, runs on just 1.5W for orbital maneuvers)

"The company tested its technology on a European satellite in September 2024. During the flight test, Miles Space’s Poseidon M1.5 thruster produced 37.5 millinewtons of thrust for five minutes at a specific impulse of 4,800 seconds, while drawing power of 1.5 watts." (Interesting Engineering, Florida startup tests water thruster, runs on just 1.5W for orbital maneuvers)

"The thruster fits into a one-unit cubesat and could be used for applications like descent from low-Earth orbit." (Interesting Engineering, Florida startup tests water thruster, runs on just 1.5W for orbital maneuvers)


There are many types of more-or-less practical plasma thrusters. 


Water is a good propellant for rockets. It's cheap, non-toxic, and a common material. Water is not like hydrogen, which requires pre-processing; that makes water easier to handle than hydrogen, which must be separated and then cooled to a very low temperature. The engine must only expand the water. Basically, the same thruster can use almost any other liquid, from hydrocarbons to hydrogen.

The system can heat the liquid using electron or other particle beams, electric arcs, lasers, or microwave systems. In some concepts, the system uses antimatter to boil the propellant. The thruster requires only electricity to create the plasma, and it can accelerate that plasma using magnets.

Plasma thrusters can get their energy from sunlight or from nuclear reactors. One form of so-called solar or light sail is a mirror that focuses sunlight into the rocket's engine chamber, where the system expands the propellant. A parabolic mirror can aim sunlight at a carbon-fiber structure, hydrogen is injected into the chamber, and that carbon-fiber structure heats the propellant.

In the most extreme versions, the system uses a laser that gets its energy from sunlight. The mirror system collects energy and focuses it on the laser element. Then the laser beam vaporizes water at a supercritical temperature, and that water turns into plasma.

In the most exotic versions of that kind of system, the engine can use some kind of electrolytic stage. The system injects a ball of water into the engine chamber.

The electrolytic system breaks water molecules, and then the positive and negative electrodes pull the resulting ions in different directions. When the positive hydrogen ions travel toward the cathode and the negative oxygen ions toward the anode, that makes it possible to create a very exotic ion engine. The oxygen travels to a plate at the front of the chamber, while the hydrogen ions travel out of the engine through the acceleration tube.

That forms an asymmetry in the force. One group of particles travels toward the front plate, where it can form the push, while the other, oppositely charged particles exit the system at a time when they don't create counter-thrust. The polarity of the system can also be reversed. Those kinds of ion engines are interesting tools.
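For scale, the standard ion-engine relations applied to a hydrogen-ion beam; the 1 kV grid voltage and 100 mA beam current below are illustrative assumptions, not figures from the article:

```python
# Standard ion-thruster scaling for a beam of hydrogen ions (protons).
# The grid voltage and beam current below are illustrative assumptions.
from math import sqrt

Q_PROTON = 1.602e-19   # elementary charge, C
M_PROTON = 1.673e-27   # proton mass, kg

grid_voltage = 1000.0  # V (assumed)
beam_current = 0.100   # A (assumed)

v_ion = sqrt(2 * Q_PROTON * grid_voltage / M_PROTON)  # exhaust velocity, m/s
mass_flow = (beam_current / Q_PROTON) * M_PROTON      # ion mass flow, kg/s
thrust = mass_flow * v_ion                            # N

print(f"Ion exhaust velocity: {v_ion / 1000:.0f} km/s")  # ~440 km/s
print(f"Thrust: {thrust * 1000:.2f} mN")                 # ~0.46 mN
```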


https://interestingengineering.com/innovation/startup-tests-water-fueled-plasma-thruster?group=test_b

Supersonic flight makes metal bonds weaker.

 


Above: North American X-15 in a wind tunnel test.

We know that friction weakens materials. Things like metal structures are vulnerable to heat because metals are not solid, homogeneous structures. Friction forms heat that destroys the metal structures. In the second image (Image 2), we can see the crystalline structure of aluminum. The atoms are not in perfect symmetry, but the structure looks a little like diamond (Image 4). That atomic structure makes aluminum very suitable for aviation. The problem is that the real bonds, marked as grey tubes, don't follow the route of the theoretical bonds, marked with black dashes. If aluminum atoms formed boxes or structures like carbon in diamond, that would make the metal stronger.




Image 2. Crystalline structure of aluminum. 


However, the structure could be more effective if those aluminum atoms formed a perfect box structure that continues homogeneously over the entire body. Things like nanotubes can transport energy out of the structure. The best arrangement for nanotubes is to run them horizontally through the metal structure: if there are no connection points, energy travels better through those tubes.

Image 3 shows the problem of energy in 3D surfaces. We can see that there are potholes in that structure, and that causes energy asymmetry in the lattice.

The potholes and hills in the structure cause differences in energy levels. They make energy travel to the lower-energy points, and that forms standing waves that push atoms apart.

There are two ways to make the material stronger: one is nanotubes, and the other is to make the metal extremely pure.

The structure is like a set of boxes, and that allows the metal to dump energy into those boxes. That energy forms a standing wave that breaks the structure sooner or later. The thing that breaks the structure is the wave reflecting inside the metal crystal. We can compare that structure with diamond's carbon structure.



(Image 3) Polarization in the lattice under a laser beam. It tells about the energy levels in the lattice.

We can see that diamond's crystal structure (Image 4) allows energy to travel out of the structure more easily than it travels out of the metal. If the energy level in the top carbon is lower than in the bottom carbon, that increases the energy flow through the diamond.


There are small metal crystals and bits of dross in the metal structure. When heat transfers to those structures, it causes standing waves in the layers. When energy travels into those small crystals, they store that energy inside them. Sooner or later, the energy levels in those metal structures rise higher than in their environment, and that energy destroys the material's structure.




(Image 4) Diamond crystalline structure. 


We know that to keep a material in its form, there must be somewhere the material can put that energy. The reason why carbon fiber withstands supersonic speed better is that it is a fiber. At supersonic speed, the air pressure pushes the carbon fiber against the wing. If that fiber runs over the wing, it can transport more energy to the air.

The next question is where that energy dump can put the energy. One answer can be nanodiamonds, which can transport energy out of the metal. Another answer to the heat problem can be nanotubes that conduct energy out of the structure. The system works because there is a lower-energy area behind the aircraft.

The nanotubes can transport energy out of metal structures if they continue over the airplane's entire body. Things like electron beams can also operate as a thermal pump that transports energy out of the structure.
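As a rough illustration of why a continuous, highly conductive path matters, here is Fourier's law for steady one-dimensional conduction. The geometry and conductivity values below are assumptions chosen for the sketch, not data from the study:

```python
# Fourier's law, q = k * A * dT / L, for a single continuous conductive path.
# Conductivities and geometry are illustrative assumptions only.
def heat_flow_watts(k, area_m2, delta_t_k, length_m):
    """Steady-state conductive heat flow through a uniform path."""
    return k * area_m2 * delta_t_k / length_m

aluminium = heat_flow_watts(k=237.0, area_m2=1e-6, delta_t_k=200.0, length_m=2.0)
cnt_bundle = heat_flow_watts(k=3000.0, area_m2=1e-6, delta_t_k=200.0, length_m=2.0)

print(f"Aluminium path:  {aluminium:.3f} W")   # ~0.024 W
print(f"Nanotube bundle: {cnt_bundle:.3f} W")  # ~0.3 W, over ten times more
```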

 https://interestingengineering.com/innovation/supersonic-speed-weakens-metal-bonds-strength-peaks-at-1060-m-s-study-finds?group=test_b


Tuesday, April 1, 2025

Quantum leaps make quantum computers more advanced.


"An artist’s impression of an analog quantum computer in which atoms are manipulated by lasers to simulate quantum many-body systems. Credit: Image courtesy of Nikita Zemlevskiy, Henry Froland, and Stephan Caspar" (ScitechDaily, Quantum Computers Take a Leap Toward Accurate Nuclear Simulations)

The new quantum computers take a leap toward accurate atomic simulations. That improves materials research. New materials like ultra-thin superconductors will turn quantum computers into new, more compact forms.

That means that maybe, in the future, quantum computers and their coolers can fit in normal rooms. That would be a big advancement, and the same applies to many other things, like fusion reactors and engine technology.

New quantum computers are the ultimate tools for very complex calculations.

There is a point in a formula's complexity where the quantum computer becomes more effective than binary computers.

Extremely complex flow simulations between atoms, ions, and the electromagnetic force's interaction with the other natural forces require extremely complex calculations. Atoms are complex wholes where the four fundamental forces interact with electrons and with the quarks that form hadrons. Those simulations drive quantum computer advancements.
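One way to see where that crossover point comes from is the memory cost of simulating qubits exactly on a binary computer: a full state vector holds 2^n complex amplitudes, so the cost doubles with every added qubit. The 16-byte amplitude size below is the usual double-precision assumption, not a figure from the article:

```python
# Memory needed to hold a full n-qubit state vector on a classical computer:
# 2**n complex amplitudes, assumed to take 16 bytes each (two 64-bit floats).
BYTES_PER_AMPLITUDE = 16

def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (20, 30, 40, 50):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} qubits -> {gib:,.2f} GiB")
# 20 qubits -> 0.02 GiB, 30 -> 16 GiB, 40 -> 16,384 GiB, 50 -> 16,777,216 GiB
```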

Quantum computers will become so trustworthy that they can be used to develop themselves. AI and quantum computers are the ultimate combination, and that makes the development of those systems faster and more effective than before.

The most effective calculators in the world are tools that can be used to calculate a qubit's behavior in the quantum system. If that behavior can be controlled or predicted, quantum computers can be trusted. The problem is this: if the quantum entanglement in the system breaks without warning, the data is lost. Quantum computers are good tools for running AI and especially for calculating large wholes.

The thing that breaks even the simplest systems is entropy. We can think of quantum computers transmitting data in a structure that looks like two yarn balls, where a string transports information between them. We can think of those strings as carpets: we can push them forward if they are straight, but a wave stretches the carpet.

In quantum systems, energy travels back along the string that transports data in the qubit. That forms the quantum superposition and entanglement.

That makes waves between those particles. The wave that forms in the string or belt that transports information warps the string. That string is not a homogeneous structure; there are multiple substrings. The string is like a flat cable in which the wires are separated, and when one of those substrings curves, that causes entropy. There is entropy even in the simplest systems, and that entropy destroys or scrambles information.

The qubit, quantum superposition, and quantum entanglement are harder to control than anybody predicted. In quantum entanglement, data travels from a higher-energy particle to a lower-energy particle along something like a belt. The problem is that those particles form a Moiré pattern when they are put into superposition and entanglement. That position creates the energy wave in the string or belt that transports data in the superposition and entanglement.

The problem with quantum entanglement is this: information can travel only from the higher energy level to the lower one. So the energy level of the string that carries data must also be higher than that of the receiving particle. The Moiré effect causes that energy to jump back into the string that carries data.

That effect forms a standing wave between the string and the receiving particle, which pushes the string away from the receiver. This means it is impossible to transport information between entangled particles with 100% accuracy, and that inaccuracy makes quantum entanglement hard to control.


https://scitechdaily.com/quantum-computers-take-a-leap-toward-accurate-nuclear-simulations/


Monday, March 31, 2025

Lightsails are coming.



"In a potential step toward sending small spacecraft to the stars, researchers have developed an ultra-thin, ultra-reflective membrane designed to ride a column of laser light to incredible speeds. (Artist’s concept.) Credit: SciTechDaily.com" (ScitechDaily, Breakthrough Lightsail: Ultra-Thin, AI-Optimized, and Ready to Race to Alpha Centauri)

Ultra-thin, strong materials are what can make a large solar sail or lightsail real. Lightsails offer a cheap and easy way to transport spacecraft to the asteroid belt. The materials used in lightsails can also offer a way to bring asteroids back to Earth: the vehicle uses the lightsail to travel to the asteroid belt, then the lightsail turns into a bag, the craft wraps the asteroid, and then it can ignite its rocket.

Theoretically, solar sails or lightsails can even travel to Alpha Centauri. The lightsail gets its energy from the Sun: the particle flow from the Sun pushes the system outward. The system can use magnetized materials to collect those ions more effectively, and there can be lasers along its route that shoot laser pulses at the lightsail. The magnetized material also allows the use of ion cannons that shoot ions at the lightsail, and thin magnets can collect those particles onto the sail. The lightsail can be used to research the solar system.
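For the photon-pressure part alone, the standard relation for a perfectly reflective sail is F = 2P/c. A quick sketch with made-up round numbers (neither the sail area nor the laser power comes from the article):

```python
# Radiation-pressure force on a perfectly reflective sail: F = 2 * P / c,
# where P is the light power actually hitting the sail.
# The sail area and laser power below are illustrative assumptions only.
C = 299_792_458.0  # speed of light, m/s

def sail_force_newtons(power_on_sail_w: float) -> float:
    return 2.0 * power_on_sail_w / C

sunlight = sail_force_newtons(1361.0 * 16.0)  # ~1361 W/m^2 at 1 AU on a 16 m^2 sail
laser = sail_force_newtons(100e9)             # a hypothetical 100 GW laser beam

print(f"Sunlight on a 16 m^2 sail: {sunlight * 1e6:.0f} uN")  # ~145 uN
print(f"Hypothetical 100 GW laser: {laser:.0f} N")            # ~667 N
```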

And ions give more power. In the most powerful version, called "Medusa," the system uses a lightsail in the inner solar system. There are thermonuclear bombs, like hydrogen bombs, along the sail's route. Those nuclear bombs can be delivered to their positions using separate spacecraft, and when the lightsail travels past them, they are detonated. That gives the spacecraft a push on the journey to Proxima Centauri.

Lightsails are very large structures. If the material is coated with metal, the sail structure can be used as a communication antenna. That makes it possible to create large radar antennas, and maybe in the future that kind of system can research the solar system. Lightsail technology can also be used in radio telescopes, radar, and electronic reconnaissance satellites.

Radio telescopes and electronic reconnaissance satellites are essentially the same systems: a large antenna can capture weak signals. So next-generation electronic reconnaissance satellites can use lightsail material technology in their structures. The same materials can be used in large-scale weather-protection structures on Earth; they can cover camps and even yards in everyday life.

https://scitechdaily.com/breakthrough-lightsail-ultra-thin-ai-optimized-and-ready-to-race-to-alpha-centauri/

Writing about machine vision. (That's a multipurpose tool.)



The same system that recognizes and sorts garbage can also recognize everything else that we teach it to recognize. 

What else could you do with a system that we can teach to recognize and sort garbage? What if that system sends information about an item to some central server? What if it sends the location where that garbage is? We can teach the system to recognize things like drug syringes and then send the image and location to the center. The system can see if a person carries a gun and then send that information and the image to the central server. The same system can also be used to pick things from a warehouse so that a person can sort them into the right boxes.

Many AI products are dual-use. The same software can serve many purposes. When we think that the software is made for something ordinary, like recognizing garbage and giving instructions about where to put it, we face another interesting thing: if we can teach the AI that, we can also teach the same AI many other things. The AI doesn't think. That means we can take images of people from TV or the net and then use those images to recognize people on the streets. We can teach that system to recognize almost anything: cars, people, and many other things.

Those algorithms might officially be used for some other purpose, but the same system that can recognize and sort garbage can recognize and sort everything else. We can teach that same system to recognize and sort almost anything, and that kind of system can also serve law enforcement and military work.

When we make a system that recognizes something and puts it into the right locker, we make a system that activates some handler. The machine vision just triggers an action when it sees something that matches a known image, as sketched below.
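A minimal sketch of that detect-and-report loop, assuming a generic `classify(image)` function and a hypothetical central endpoint; the classifier, the watch-list labels, and the URL are placeholders, not anything from the original text:

```python
# Minimal detect-and-report loop: classify a frame, and if the label is on a
# watch list, send the image and its location to a central server.
# classify(), the labels, and the endpoint URL are hypothetical placeholders.
import json
import urllib.request

WATCH_LIST = {"syringe", "plastic_bottle"}           # labels we care about
CENTRAL_ENDPOINT = "https://example.invalid/report"  # placeholder URL

def classify(image_bytes: bytes) -> str:
    """Placeholder for any trained image classifier."""
    raise NotImplementedError

def report(label: str, image_bytes: bytes, lat: float, lon: float) -> None:
    payload = json.dumps({
        "label": label,
        "lat": lat,
        "lon": lon,
        "image_hex": image_bytes.hex(),
    }).encode()
    req = urllib.request.Request(CENTRAL_ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def handle_frame(image_bytes: bytes, lat: float, lon: float) -> None:
    label = classify(image_bytes)
    if label in WATCH_LIST:          # only matching detections are forwarded
        report(label, image_bytes, lat, lon)
```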

Software whose purpose is to recognize things like bottles or garbage and send that information to the city's garbage unit can also recognize things like vehicles and cannons and send that information to military leaders. The same software can recognize plastic bottles and sort them into the right recycling box.

And it can also recognize tanks on the battlefield, and even sort those tanks by their makes and types. Then the system can select the right ammunition to destroy the target. This kind of software is suitable for military work, even though its original purpose is something else.


Computing power alone doesn't mean that an LLM turns into AGI.



When we think about large language models (LLM) and artificial general intelligence (AGI), we sometimes forget that AGI would be an extended version of an LLM. An LLM can handle any mission if we imagine it has the right dataset. Then we face another thing: we sometimes forget that an AGI also just handles the data that it has. The system connects data into new forms, like puzzles. To generate things, the AI requires data that it puts into a new order. The biggest difference between AGI and LLM is the scale of questions they can answer. The system is productive only if it has data it can handle, and that is one of the things we must realize.

AI systems are impressive, but they are also computer systems. Those systems have two layers: the "iron," or physical, layer and the software layer. The AI can run as separate programs or be integrated into the operating system. Or the AI algorithms can operate at the kernel level when the AI software is loaded into microchips. That can seem like "iron-based AI," but it is still software, like all other AI.

When we think about AI and its shape, we must realize that even the best systems, like human brains, are useless without information.

The software sorts information like puzzle pieces. We call that process "thinking." Humans have two thinking speeds, fast and slow: the slow one is analytic, and the fast one is like a reflex.

Computers are useless without programs. Those programs are algorithms that the AI uses to control data. The thing that makes cognitive AI hard to create can seem simple to solve.

We must make a system that learns like humans, which means mimicking the human learning process in the system. When we make a robot that reads a book and stores that information in its memory, we must realize that some things, like program code, the computer can handle quite easily.

It simply sees the code and then compiles it against its data. The system sees the details or attributes that make the database controller search for the right database for the right programming language, like C++. But when the system must handle abstract or uncertain data, there is a problem. When the system learns something by watching movies, it must match that data with the things it sees on the streets.

The problem is that the computer doesn't think. We can show it anything, like movies about circus artists, and tell it that the people it sees are "boxes." The computer may have details about real boxes, but it will still connect those people to the database entry for boxes. That might seem ridiculous, but it is possible. The same thing happens if the robot gets the order to get the car.

The robot walks to the street and takes the first car it finds if there is no program that makes it choose only the car its master owns. The robot doesn't by itself make a difference between a car, a lorry, a van, or a truck; for the robot, they are all cars. So if the robot must get a car to haul some sand, it can go to the nearest car, take it, and simply put the sand in the trunk. The robot must have information about what type of vehicle it needs for carrying sand.

That means we must also develop programs so that we can create AI that doesn't produce surprises. Large language models are quite a new tool, and they are advancing fast. But we must realize that the road to AGI can be longer than we expect, or it can become a reality sooner than we expected. We must also understand that there is no single person who can ask all possible questions in the world. A human's knowledge is limited to the data that the human has stored, and there is no "general person" who knows everything in the world.


Sunday, March 30, 2025

Will LLM lead to artificial general intelligence, AGI?



"Artificial general intelligence (AGI) is a type of highly autonomous artificial intelligence (AI) intended to match or surpass human capabilities across most or all economically valuable cognitive work. This contrasts with narrow AI, which is limited to specific tasks.[1] Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI." (Wikipedia, Artificial general intelligence)

When we think about AGI and its relationship with humans, we can say that the AGI would be only an extremely large language model, an LLM. That means that to turn into AGI, the LLM requires only an extremely large database structure. That kind of database structure is hard to operate, but it's possible. The AGI makes "droplets," small language models (SLM), for each mission.

We can put all our equipment under the control of a large language model, LLM. Those systems require a computer and a socket that the LLM can use to control them. In the traditional model, every single device and system we have can contain a computer: the system that drives the vehicle on the road or cleans our house. The artificial general intelligence, AGI, requires those sockets to control the vehicle. When the user says, "Car, come here," the AGI locates the person and gives instructions to the vehicle, which drives over to pick the person up.

In that model, the AGI requires that we update everything we have. But there is another way to build things: a humanoid robot that has an extremely large database structure. That system can operate in every situation that we can. So we can say that the AGI is only a large-scale LLM. The robot has only a small computer; however, the internet allows it to communicate with data centers. When the robot gets a new mission, the data center generates the data structure, or dataset, that the robot needs.



The data center then creates a more limited but compact database for that robot. In that model, the LLM creates a series of SLMs to make the robot operate in situations like visiting a shop.

The robot can use three or four datasets in that mission.

First, the robot must go out. It deletes the "home" dataset and loads the "walk to the shop" dataset.

At the shop, the robot changes the dataset to "operate at the shop". 

Then it changes the dataset to "go home and carry the things that you bought." Finally, the last dataset contains what the robot must do when it takes the shopping to the kitchen. In this model, every skill that the robot has is a different database structure, or dataset. The central computer cuts a small part of its master data into the dataset series that the robot needs for its missions, as sketched below. The humanoid robot is the thing that can use all kinds of equipment.
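A minimal sketch of that swap-the-dataset idea, assuming a hypothetical `load_dataset` call on the robot and a central store keyed by mission stage; all class, method, and stage names below are made up for illustration, not a real robot API:

```python
# Toy mission controller: the central store holds the full "master" plan,
# and the robot only ever carries the compact dataset for its current stage.
# All names here are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class Robot:
    active_dataset: str = "home"

    def load_dataset(self, name: str) -> None:
        # In a real system this would download a compact SLM/dataset bundle.
        print(f"Swapping '{self.active_dataset}' -> '{name}'")
        self.active_dataset = name

@dataclass
class CentralStore:
    mission_plans: dict = field(default_factory=lambda: {
        "shopping": ["walk_to_shop", "operate_at_shop",
                     "carry_purchases_home", "unpack_in_kitchen"],
    })

    def run_mission(self, robot: Robot, mission: str) -> None:
        for stage in self.mission_plans[mission]:
            robot.load_dataset(stage)   # one compact dataset per stage

CentralStore().run_mission(Robot(), "shopping")
```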

The system can use old-fashioned cars, trucks, and hovering machines, and we can say that the humanoid robot is the socket that connects everything to the internet. The robot can use machine learning to find new skills. If a robot must fix a TV, it only needs to know the model of the TV, and then it makes checks, like cable checks and other things, just like a human would.

When we talk about AGI, we must realize that even humans cannot do everything. We need to practice everything, and when we face a system that is unknown to us, we must read the instructions. When a robot learns something, it creates a new dataset. If a data center handles thousands of robots, one of them can learn new things, and then the system can scale that dataset across the whole network.



Will LLMs lead to artificial general intelligence, AGI? That is a good question, and the answer depends on what we mean by AGI. If we imagine a situation in which we can use every single device we see, from microwave ovens to taxi cars and street-sweeper robots, through an AI that asks when we want the street sweeper to clean our yard or when the robot taxi should come to get us, we can say that the LLM can be the AGI. The street-sweeping robot can also ask if we need help with our baggage, and maybe that same robot can cut our hair.

Then we must say that the robot we see in front of us can do everything that we ask. It can turn into a cab driver, cut our hair, clean our homes, and make pizza for us. That kind of robot has a large range of skills that it can use in every situation. Every skill that a robot has is a database in a large database structure, and the central computer can upload the right dataset to the robot that it controls.

When we think about the order "pick up my baggage, take it to the car, and then drive the car to pick me up," fast internet makes it possible to download and swap the needed datasets in the robot computer's memory. In that case, the AI that controls the robot creates a small language model, an SLM, for each stage of the mission. The SLM is a compact, reflex version of the LLM, and the system can use an SLM in cases where the robot requires fast reactions.


https://scitechdaily.com/quantum-computers-just-got-smart-enough-to-study-their-own-entanglement/


https://en.wikipedia.org/wiki/Artificial_general_intelligence


Saturday, March 29, 2025

The mystery of Mars is growing.



"This graphic shows the long-chain organic molecules decane, undecane, and dodecane. These are the largest organic molecules discovered on Mars to date. They were detected in a drilled rock sample called “Cumberland” that was analyzed by the Sample Analysis at Mars lab inside the belly of NASA’s Curiosity rover. The rover, whose selfie is on the right side of the image, has been exploring Gale Crater since 2012. An image of the Cumberland drill hole is faintly visible in the background of the molecule chains. Credit: NASA/Dan Gallagher" (ScitechDaily, Life on Mars? NASA’s Curiosity Rover Finds Prebiotic Clues in a 3.7-Billion-Year-Old Rock)

New observations of Martian rocks uncover long carbon molecules. Those molecules can be remnants of ancient life, or maybe something else formed them. When probes research the red planet, we can see ancient lakes and rivers.

Today those lakes and rivers are all gone. The reason is that Mars's atmosphere is very thin, and that allows cosmic radiation to reach Mars's surface.

There could have been bacteria or proto-bacteria on that planet. But then some cosmic catastrophe, like an impact with a big asteroid or protoplanet, blew lots of material from the red planet into space. It's possible that this impact also pushed Mars away from its original place. But was Mars originally closer to the Sun or farther from it? That is a good question. Another good question is: how big was Mars before those cosmic catastrophes? There were many catastrophes in the planet's youth.


"A view of Mars’s north polar cap, reconstructed from various spacecraft data. The spiral troughs that dissect the cap are visible. Credit: NASA/Goddard Space Flight Center Scientific Visualization Studio" (Astronomy, Ghost rivers, hidden lakes: The long search for water on Mars)


This river delta looks like a pituitary gland. "Glacier-like features, where a mass of material appears to have flowed downhill between two ridges, hint at where ice probably accumulated in the past in the mid-latitudes of Mars. Credit: NASA/JPL-Caltech/University of Arizona" (Astronomy, Ghost rivers, hidden lakes: The long search for water on Mars)


"A close-up of Mars’ south pole shows a thick ice cap, thought to be made up of frozen water and frozen carbon dioxide. Credit: ESA/DLR/FU Berlin/Bill Dunford" (Astronomy, Ghost rivers, hidden lakes: The long search for water on Mars)

We see only the last version of that planet. Mars did not form the entire asteroid belt, but there can be some remnants of Mars's ancient lithosphere there. The river and lake remnants formed after the last catastrophe, and it's possible that a cosmic ice ball hit Mars at that time. That leads to interesting thought experiments. Mars's gravity is very weak if we compare it with Earth's gravity.

That means a cosmic snowball would not heat up as much as it would if it hit Earth. So those cosmic snowballs could carry protobacteria to Mars. The big problem is this: where did the water that formed those ancient lakes and rivers come from, if we think that the red planet was heavier and bigger before it lost its lithosphere?

So, did Mars collide with an icy moon or some icy dwarf planet like Jupiter's moon Europa? That could explain those ancient lakes and rivers. If Mars really lost its lithosphere in some impact, the impact should have vaporized the water that existed before it and thrown that water into space.

We know that the biggest object in the asteroid belt, the dwarf planet Ceres, contains a large amount of water ice, so there should be other icy asteroids in the belt. The depth of those buried oceans is unknown. But if the water that formed the Martian lake and river remnants we see came from somewhere else, the best candidate could be a watery dwarf planet or moon like Europa.


https://www.astronomy.com/science/ghost-rivers-hidden-lakes-the-long-search-for-water-on-mars/


https://scitechdaily.com/life-on-mars-nasas-curiosity-rover-finds-prebiotic-clues-in-a-3-7-billion-year-old-rock/


https://en.wikipedia.org/wiki/Ceres_(dwarf_planet)#Internal_structure


https://en.wikipedia.org/wiki/Mars

The new tools are improving communication and microchip technology.



"Schematic of chiral terahertz generation and control: A femtosecond laser interacts with a patterned spintronic emitter, producing elliptically or circularly polarized terahertz waves. Rotating the emitter adjusts the polarization, while built-in electric fields—formed by charge accumulation at the pattern’s edges—control the amplitude and phase differences. Credit: Q. Yang et al., 10.1117/1.AP.7.2.026007" (ScitechDaily,Tiny Stripes Unlock Powerful Terahertz Control for Faster Data and Sharper Scans)

"A revolutionary new spintronic device developed in China enables powerful, precise control of terahertz (THz) wave polarization, without the need for bulky external components."(ScitechDaily,Tiny Stripes Unlock Powerful Terahertz Control for Faster Data and Sharper Scans)


Terahertz radiation and laser technology can boost microchip technology, and robotics along with communication. Terahertz radiation is non-ionizing, and it doesn't disturb its environment the way some radio waves do. This makes it a suitable tool for miniature microchips. Terahertz communication doesn't cause electromagnetic interference in the same way as some radio waves.

Terahertz radiation also doesn't interact with radio waves, which gives it very high accuracy in communication. Coherent terahertz radiation makes it possible to create a secure communication tool. The terahertz stripes can make terahertz masers possible, and terahertz masers are tools that can make very accurate communication possible.

Another interesting tool is the miniature laser system, which can allow the making of new types of microchips. Things like drones can also communicate by using small lasers. Both terahertz masers and miniature lasers can make it possible to create smaller drones than ever before. Those systems allow the drone to use radio-wave-based energy transmission. Those new drones can be like a kind of smoke that travels in the air.

"Deep ultraviolet solid-state laser with a compact setup generates a vortex at 193 nm wavelength. Credit: H. Xuan (GBA branch of Aerospace Information Research Institute, Chinese Academy of Sciences)"

"A new solid-state laser produces 193-nm light for precision chipmaking and even creates vortex beams with orbital angular momentum – a first that could transform quantum tech and manufacturing." (ScitechDaily,Scientists Create Compact Laser That Could Revolutionize Chipmaking and Quantum Devices)

Miniaturized laser technology is a tool that can make miniature drones to fight against things like bacteria. Those drones can capture and analyze any bacteria that they see. Laser systems can cut DNA into the wanted pieces.

Miniature lasers can cut cell organelles, and that makes it possible for the system to cut things like molecules at certain points. The nano-drones can operate in a nanotechnology factory, carrying proteins and other things into the right places. That is one way to think about the nanofactory: it can be a tank where the chemical environment is very accurately controlled.

The nanomachines swim in the chemically controlled liquid, and then they can carry DNA or other molecules to certain points in the structure. Those nanomachines can look like smoke when they operate in that environment. Advanced AI and quantum computers control the nanorobot swarm. The robot swarm can get its energy from light or from radio waves; in that system, normal light can give energy to the nanomachines' photovoltaic cells.


https://scitechdaily.com/scientists-create-compact-laser-that-could-revolutionize-chipmaking-and-quantum-devices/


 https://scitechdaily.com/tiny-stripes-unlock-powerful-terahertz-control-for-faster-data-and-sharper-scans/


https://en.wikipedia.org/wiki/Terahertz_radiation


Friday, March 28, 2025

New drones can listen to underwater communication.


"Researchers from Princeton and MIT developed a way to intercept underwater messages from the air using radar, overturning long held assumptions about the security of underwater transmissions. Credit: Princeton University/Office of Engineering Communications" (ScitechDaily, Not So Secure: Drones Can Now Listen to Underwater Messages)


"Cross-medium eavesdropping technology challenges long-held assumptions about the security of underwater communications." (ScitechDaily, Not So Secure: Drones Can Now Listen to Underwater Messages)

"Researchers from Princeton and MIT have developed a method to intercept underwater communications from the air, challenging long-standing beliefs about the security of underwater transmissions." (ScitechDaily, Not So Secure: Drones Can Now Listen to Underwater Messages)

"The team created a device that uses radar to eavesdrop on underwater acoustic signals, or sonar, by decoding the tiny vibrations those signals produce on the water’s surface. In principle, the technique could also roughly identify the location of an underwater transmitter, the researchers said." (ScitechDaily, Not So Secure: Drones Can Now Listen to Underwater Messages)


The new Princeton and MIT system lets drones use radar to detect the underwater messages that submarines use to communicate with each other. Drones can also attach themselves to a submarine's hull: they can slip into a harbor and then connect themselves to submarines.

Those drones can hear everything inside the submarine. This is why submarines should have a vacuum layer between the inner and outer hulls. That vacuum layer makes it harder to hear the things the crew says in the submarine, and it can also decrease the noise from the engines.

Drones can endanger privacy in many ways. They can take images of buildings and hear what people say in their rooms. They can carry normal and laser microphones to houses. They can see through clothes using IR cameras. Drones can also put sensors on data cables and take images of screens. Drones can carry plasma and spectroscopic sensors that allow them to see things like the chemical compounds of a fuel by analyzing exhaust gas.

By illuminating targets with low-power laser systems, the plasma sensor can see the chemical compounds of the materials. So those drones can see many things that humans cannot.

But drones can also hear underwater communication. They can use acoustic microphones (hydrophones) to hear things like acoustic messages from submarines. Those systems can also hear sounds from the submarine's engines and propellers. There is a possibility that an underwater drone travels near a submarine and connects itself to the hull, which allows the drone to hear everything that people say in the submarine. New drones can operate airborne and underwater, and they can operate in many ways against an enemy.

Advanced acoustic detectors can also use laser beams and radar systems to see how the water surface moves, as in the sketch below. Those miniaturized systems can perform almost the same missions as manned helicopters.
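A minimal sketch of the idea in the quoted description, recovering an acoustic tone from tiny surface-displacement measurements; the sampling rate, tone frequency, and noise level are arbitrary assumptions, and no real radar front end is modeled:

```python
# Toy version of the quoted idea: an underwater acoustic tone shows up as a
# tiny periodic displacement of the water surface. Given displacement samples
# (simulated here), a band-pass around the expected band recovers the tone.
# Sampling rate, frequency, and noise level are arbitrary assumptions.
import numpy as np

fs = 5000.0                        # samples per second (assumed)
t = np.arange(0, 1.0, 1 / fs)
tone_hz = 400.0                    # assumed acoustic carrier frequency
surface = 1e-6 * np.sin(2 * np.pi * tone_hz * t)      # micrometer-scale ripple
measured = surface + 5e-6 * np.random.randn(t.size)   # measurement noise

# Crude band-pass via FFT masking around the expected band (300-500 Hz).
spectrum = np.fft.rfft(measured)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum[(freqs < 300) | (freqs > 500)] = 0
recovered = np.fft.irfft(spectrum, n=t.size)

peak = freqs[np.argmax(np.abs(np.fft.rfft(recovered)))]
print(f"Dominant recovered frequency: {peak:.0f} Hz")  # ~400 Hz
```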

The underwater drone can also use things like hollow warhead detonators to damage the submarine's outer shell. Those small drones can make holes in the hull like torpedo tube hatches. And they can damage submarine communication masts. Drones can also have acoustic transmitters that uncover the submarine's positions. 


 https://scitechdaily.com/not-so-secure-drones-can-now-listen-to-underwater-messages/
