11 October, 2011

The photosynthetic slug

Along the east coast of North America lives a small mollusc, the sea slug Elysia chlorotica, which is able to incorporate chloroplasts from the alga Vaucheria litorea, on which it feeds. This gastropod can use these organelles to survive for long periods without food, living on the sugars produced by photosynthesis. As a juvenile, the slug must feed on the alga before the chloroplasts are stably incorporated into its cells.


However, the most interesting thing about Elysia chlorotica is its ability to keep the chloroplasts alive for several months. This is because it not only takes up the organelles, but also integrates into its own genome the algal genes that support the chloroplasts' photosynthetic function. This process is usually called "horizontal gene transfer", because an organism acquires genes from another organism without being its offspring; the phenomenon is common among bacteria.

10 October, 2011

Warm-blooded reptiles?

Ever since the first dinosaur fossils were found, many scientists agreed that these creatures had a metabolism similar to that of modern reptiles. This would mean they were ectothermic, with an internal temperature tracking that of the external environment. However, later discoveries led researchers to consider the opposite hypothesis: endothermy.

The major doubts about dinosaurs' thermoregulation arise when we consider the species that lived at polar latitudes. Several sites in Australia and Russia have yielded a great variety of dinosaurs which, in some cases, lived at temperatures close to zero degrees Celsius. For example, in northern Russia, near Kakanaut, many fossils have been extracted from Cretaceous rocks, of both carnivores (troodontids, dromaeosaurids, tyrannosaurids) and herbivores (hadrosaurids, ankylosaurs and others). Even some hadrosaur eggs have been found there, which suggests that these animals nested at those latitudes rather than migrating seasonally.

A recent study by Holly Woodward, Jack Horner and colleagues at Montana State University (published online in PLoS ONE) showed that dinosaurs living at polar latitudes were not physiologically different from other species, contrary to what an earlier study, hampered by the small number of finds, had claimed.

The endothermy hypothesis is also supported by the presence of feathers on many dinosaur species, especially on theropods such as dromaeosaurids. This group includes the famous Velociraptor and is probably an evolutionary line parallel to that of birds, with which they likely share a common ancestor. Structures like feathers or hair are typical of endothermic animals and are important for thermoregulation, since they insulate the body and make it less sensitive to changes in environmental temperature.

Another study, published in "Science" and led by researchers at Bonn University and the California Institute of Technology, determined the internal temperature of sauropods. These enormous herbivores had a body temperature similar to that of modern mammals, between 36 °C and 38 °C. The researchers analysed the animals' teeth, which contain carbonate minerals built from different isotopes of carbon and oxygen. The temperature at which the carbonate forms influences how often the heavy isotopes 13C and 18O bond to each other within the tooth: the higher the temperature, the rarer this bond. By measuring the abundance of these bonds, they determined the body temperature of the animals.
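In the technical literature this method is known as "clumped isotope" thermometry. Its calibrations are usually written in the general form

\Delta_{47} = \frac{A}{T^{2}} + B

where Delta_47 measures the excess of 13C-18O bonds in the carbonate, T is the absolute temperature at which the mineral formed, and A and B are empirically determined constants whose exact values depend on the calibration used. The 1/T^2 dependence is what makes hotter teeth contain fewer of these heavy-isotope bonds.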

These discoveries, and many others, suggest that the most diversified and successful group of reptiles had a physiology similar to that of mammals, which probably allowed them to colonize almost all available environments.

09 October, 2011

Thermonuclear fusion: the confinement of plasma [3]

Stars shine because of the high temperature and pressure in their cores. This is called gravitational confinement and, for obvious technical reasons, it cannot be reproduced on Earth. To reach the required conditions, other forms of confinement have been proposed, which work at higher temperature and lower pressure. Experimental fusion reactors are currently divided into two categories: inertial confinement and magnetic confinement.

Inertial Confinement

In these reactors the plasma is produced with high-energy lasers. Small spheres containing a mixture of deuterium and tritium are placed in a vacuum chamber.


Several laser beams hit the sphere, vaporizing its plastic shell, called the "ablator". The deuterium and tritium are driven toward the geometric centre, reaching very high density and temperature.


The reactor shown in the figure is the National Ignition Facility, completed in California in 2009. It uses 192 laser beams and, at the moment, it does not produce more energy than it consumes. However, the project has only just begun and could soon reach a much higher energy efficiency.

Magnetic Confinement

Even though inertial confinement may soon reach a good energy efficiency, magnetic confinement seems closer to this goal. These reactors exploit the fact that a plasma is composed of charged particles, which are subject to the Lorentz force.


The intensity of this force depends on the particle's velocity and charge. It is zero if the charge is zero or if the velocity is parallel to the magnetic field, and it is maximum when velocity and field are perpendicular. The force is always perpendicular to both the velocity and the magnetic field, so it cannot change the particle's speed, only its direction. A velocity perpendicular to the field results in a circular trajectory, while any other orientation leads to a helical trajectory, described by the following equations:
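In standard notation, the force and the radius of the resulting gyration can be written as (a sketch of the usual textbook expressions):

\vec{F} = q\,\vec{v} \times \vec{B}, \qquad r_{L} = \frac{m\,v_{\perp}}{|q|\,B}

where v_perp is the velocity component perpendicular to the field. The parallel component is left untouched by the force, which is exactly what turns the circle into a helix.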

According to these laws, one possible way to confine a plasma is to use a solenoid closed by magnetic mirrors. However, the efficiency of this geometry is not even close to that of a torus.


Based on this model, several types of reactor have been proposed and tested. One of the most promising is the "stellarator", which adds helical windings to those of the torus. This different geometry makes it possible, for instance, to eliminate the axial current that every other toroidal machine needs in order to create the poloidal field (as we can see in the previous figure).


The main problem with the stellarator is its extreme complexity, which makes it very difficult to design and build. Simpler reactors are based on the "tokamak" geometry, which is by far the most widely used today.

08 October, 2011

Thermonuclear fusion: deep inside the heart of stars [2]

Thermonuclear fusion requires very high temperatures and pressures. So far, such conditions have been found only in the hearts of stars. There, matter exists as an ionized gas: the particles carry an electric charge, so before the nuclear interaction can take over, the Coulomb repulsion has to be overcome.



Consider the simplest situation, two hydrogen nuclei: the energy required to overcome the Coulomb barrier is about 1000 keV, while the temperature inside a star is typically around 10^7 K. The thermal energy of a monatomic gas can be calculated as follows:
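For a monatomic ideal gas, the average thermal energy per particle is given by the standard kinetic-theory result:

E = \frac{3}{2} k_{B} T

With k_B of about 8.6 x 10^-5 eV/K and T of about 10^7 K, this gives roughly 1.3 keV, nearly a thousand times smaller than the 1000 keV barrier quoted above.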
According to these figures, nuclear reactions should be impossible under such conditions. However, three factors combine to give these reactions a certain probability of success:
  • The particles follow a Maxwell speed distribution, so a fraction of them have an energy well above the average, and some reach the required level;
  • According to quantum mechanics, a particle with low energy still has a small probability of crossing the Coulomb barrier by quantum tunnelling;
  • Stars contain an enormous number of particles: even though the average energy is not enough to overcome the barrier, a great many particles will still have sufficient energy.

Nevertheless, most of the energy produced by a Sun-like star comes from the proton-proton chain, which operates at core temperatures of the order of 10^7 K. The reaction is essentially the transformation of four protons into a helium nucleus, according to the following process:
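Written out, the main branch of the chain (the standard ppI branch) is:

p + p \rightarrow {}^{2}\mathrm{H} + e^{+} + \nu_{e}
{}^{2}\mathrm{H} + p \rightarrow {}^{3}\mathrm{He} + \gamma
{}^{3}\mathrm{He} + {}^{3}\mathrm{He} \rightarrow {}^{4}\mathrm{He} + 2p

The first two reactions must occur twice for every helium nucleus produced, so the net result is four protons turned into one helium-4 nucleus plus two positrons and two neutrinos.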


The total energy produced by this process is about 26 MeV per cycle, although a small share of it (roughly 0.26 MeV per neutrino) escapes in the form of the two neutrinos emitted along the way.


This cycle is dominant only during the first part of a star's life. At higher core temperatures another chain takes over, known as the CNO cycle (Carbon-Nitrogen-Oxygen).
Like the previous one, this chain transforms four protons into a helium nucleus, and the total energy output is similar, about 25 MeV. So what distinguishes the two chains? In the CNO cycle heavier nuclei act as catalysts; because of their larger charge, higher energies are required to overcome the Coulomb barrier, and therefore higher temperatures.
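For reference, the main CNO loop runs as follows (the standard textbook sequence):

{}^{12}\mathrm{C} + p \rightarrow {}^{13}\mathrm{N} + \gamma
{}^{13}\mathrm{N} \rightarrow {}^{13}\mathrm{C} + e^{+} + \nu_{e}
{}^{13}\mathrm{C} + p \rightarrow {}^{14}\mathrm{N} + \gamma
{}^{14}\mathrm{N} + p \rightarrow {}^{15}\mathrm{O} + \gamma
{}^{15}\mathrm{O} \rightarrow {}^{15}\mathrm{N} + e^{+} + \nu_{e}
{}^{15}\mathrm{N} + p \rightarrow {}^{12}\mathrm{C} + {}^{4}\mathrm{He}

The carbon-12 nucleus consumed in the first step is returned in the last one, which is why carbon, nitrogen and oxygen act only as catalysts.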

In any case, of the two, the p-p chain is the reference process for energy production by thermonuclear fusion on Earth: the CNO cycle requires temperatures far too high for our current technology. The following articles will therefore always refer to the former.

The hidden twin of the Amazon River


In 2007 a group of Brazilian geologists from the Coordenação de Geofísica of the Observatório Nacional, led by the Indian-born scientist Valiya Hamza, discovered what may be the world's longest underground river, flowing at a depth of about 4 km beneath the Amazon.


An enormous amount of water flows very slowly for about 6,000 km, from the Andes to the Atlantic Ocean. With a width of between 100 and 200 kilometres, the "Hamza" river is probably the world's largest body of flowing groundwater. Hamza and his team analysed data from several oil wells drilled by the Petrobras company between 1970 and 1980.


The origin of this river is probably linked to the collision between the South American plate and the plate underlying the Pacific Ocean. The peculiarly porous and permeable rocks of the region allow water to percolate to great depths; impermeable layers then prevent it from rising again, and the topographic gradient (the slope along which the current flows) drives it in the same direction as the Amazon River.

Thermonuclear Fusion: beyond particle physics [1]

The process on which thermonuclear fusion is based is very simple: two light nuclei fuse into a heavier one. But how can this process take place?


If we consider two hydrogen atoms, each nucleus consists of a single proton. This means that, as they approach, they experience a growing repulsive electromagnetic force. According to Coulomb's law this force is:
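In its usual form:

F = \frac{1}{4\pi\varepsilon_{0}}\,\frac{q_{1} q_{2}}{r^{2}}

where q_1 and q_2 are the two charges (here both equal to the elementary charge e) and r is their separation; the force grows rapidly as the distance shrinks.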

If electromagnetic repulsion were the only force at work, nuclei could not even exist. Gravity, on the other hand, is attractive, but it is far too weak: between two protons the electromagnetic force is roughly 10^36 times stronger than the gravitational one.
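That ratio is easy to check with a few lines of code; here is a quick sketch in Python, using standard textbook values for the constants:

# Ratio of the Coulomb repulsion to the gravitational attraction
# between two protons (the separation r cancels out in the ratio).
k_e = 8.988e9     # Coulomb constant, N m^2 C^-2
G   = 6.674e-11   # gravitational constant, N m^2 kg^-2
e   = 1.602e-19   # elementary charge, C
m_p = 1.673e-27   # proton mass, kg

ratio = (k_e * e**2) / (G * m_p**2)
print(f"electromagnetic / gravitational ~ {ratio:.1e}")  # about 1.2e36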


The Standard Model of particle physics describes two more forces: the weak and the strong nuclear forces. These forces do not affect every particle, and their range is extremely short: about 10^-15 m for the strong force and 10^-18 m for the weak one. This explains why they are unfamiliar to us and have no noticeable effect on the macroscopic world.



The Standard Model also proposes that every force has a quantum mediator, represented by a particular particle. For instance, the electromagnetic force is mediated by photons, while gravity is supposed to be mediated by gravitons. We will later focus on the strong nuclear force, fundamental for thermonuclear fusion, and on its mediators, the gluons, but first we need to explain what these particles are.


Particles are divided into several categories:
  • Leptons: these are fundamental particles, meaning they are not made up of other particles; they are not affected by the strong nuclear force, only by the weak one (and, if charged, by the electromagnetic force). They come in three families: electron, muon and tau, each with a corresponding neutrino.
  • Hadrons: these are massive particles, affected by all four fundamental forces. More than a hundred of them are known today, although the proton is the only stable one. They are made up of quarks and are divided into two further categories: baryons and mesons.

If we take into consideration the Pauli exclusion principle, according to which:
"Two identical fermions cannot occupy the same quantum state simultaneously"
we have to divide particles into two further categories:
  • Fermions: particles with half-integer spin, which obey the Pauli principle;
  • Bosons: particles with integer spin, which do not obey the Pauli principle. The quantum mediators of the forces are bosons.
The strong force, for instance, is mediated by bosons called gluons, which obey the laws of quantum chromodynamics (QCD). Unlike the electrically neutral photon of quantum electrodynamics (QED), gluons themselves carry color charge and therefore participate in the strong interaction as well as mediating it.


Baryons are composed of three quarks, each with a certain color charge. The possible combinations determine which particle is formed. The color charge is not static but is exchanged continuously, and this residual interaction binds hadrons to one another, allowing them to coexist inside the nucleus.


The strong nuclear force is what allows atomic nuclei to exist. It is deeply related to the binding energy, defined as:
"the energy required to break the bonds between the protons and neutrons inside the nucleus."
The binding energy per nucleon differs from nucleus to nucleus because it depends on the mass number: it increases with mass number up to iron and then decreases. The fusion of two light nuclei therefore produces a heavier, more tightly bound nucleus, releasing a certain amount of energy.




The mass of the resulting nucleus is in fact lower than the sum of the masses of the two lighter ones. According to Einstein's special relativity, this missing mass is converted into energy.
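As a back-of-the-envelope example (using the deuterium-tritium reaction of the inertial-confinement article rather than the stellar p-p chain):

{}^{2}\mathrm{H} + {}^{3}\mathrm{H} \rightarrow {}^{4}\mathrm{He} + n, \qquad \Delta m \approx 0.019\ \mathrm{u} \;\Rightarrow\; E = \Delta m\,c^{2} \approx 17.6\ \mathrm{MeV}

About 0.4% of the initial mass disappears and reappears as the kinetic energy of the helium nucleus and the neutron.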

07 October, 2011

Possible solutions to energy crisis

Over the last decades, the problem of energy production has reached very worrying levels. The progressive depletion of fossil fuel reserves requires the development of new energy sources. The main alternatives identified so far are:

  • Nuclear Power;
  • Wind;
  • Solar thermal;
  • Solar photovoltaic;
  • Geothermal;
  • Hydroelectric.
The problem is that none of them can yet compete with fossil fuels, because of their low efficiency and high cost. The only plausible large-scale alternative at the moment seems to be nuclear power.


Energy produced by nuclear fission can indeed be considered the most efficient option at the moment, but only if we ignore long-term costs. Nuclear waste needs long periods of secure storage to avoid environmental risks, and this hazard raises the overall cost of producing this kind of energy, eventually making the source uneconomical.

Even though its efficiency is the highest in the short term, it necessarily decreases over time because of the need to store nuclear waste. This "nuclear debt" has proved to be a serious problem, weighing on the budgets of several countries.

Another problem concerns the supply of uranium. Like coal, uranium is a non-renewable resource, which means it will not last forever. Many estimates have been proposed: some say it will last 40 years, others 400 years, but that is not the main point. The supply of uranium is limited, so it cannot be a definitive solution.

Recently, however, another possible energy source has been proposed, the same one that makes the Sun shine. We are talking about thermonuclear fusion.

06 October, 2011

From raw materials to nanotechnology

Nowadays many people know what a CPU is, but only a few can describe how millions of microscopic transistors are built and arranged so that they work together.


From sand to “ingots”…

Sand is a very common material on the Earth's surface and is rich in silicon, mostly in the form of silicon dioxide.

The first step is to separate the silicon from the sand. This purification requires several passes to reach a high level of purity (at most about one foreign atom per million). The purified silicon is then melted and grown into a single crystal called an "ingot", which weighs about 100 kg and is 99.9999% pure.






From “ingots” to “wafers” and first treatments…



The ingot is then cut into many thin discs called "wafers", which are thoroughly cleaned to eliminate any defects.
Modern CPUs are generally made from wafers 300 mm in diameter.







A blue photoresist liquid is then spread over the surface to prepare it for the next step. During this phase the wafer is spun continuously so that the liquid is distributed uniformly (as we can see in the picture).









The wafer is then exposed to UV radiation, which is partially blocked by a mask (as we can see in the picture). The areas exposed to the UV light become soluble. The mask defines a precise pattern, and a lens projects it onto the wafer at a reduced scale, so the features printed on the wafer are smaller than those drawn on the mask.
This reduction is what makes it possible to create microscopic transistors from much larger templates.





From “wafers” to transistors…


The picture shows an enlarged image of a transistor. A transistor works as a switch, since it can control the electric current flowing through it.
Nowadays transistors are so small that 30 million of them would fit on the head of a pin.
The areas exposed to the UV light are now dissolved and removed with a specific chemical solvent; this is the first step in building the CPU. The areas that were not exposed remain covered by photoresist, which protects the material underneath. The photoresist film is eventually removed, and the result is similar to what we see in the picture.


Subsequently, a new photoresist film is applied and the wafer is again exposed to UV light and washed. This prepares it for doping, in which exposure to ionized particles modifies the chemical properties of the silicon; this is essential to give the material the electrical properties a CPU needs.
The next step, called "ion implantation", bombards the exposed areas of the wafer with ions accelerated to very high speeds (around 300,000 km/h). The ions embed themselves in the silicon, altering its chemical properties.
The photoresist layer is removed after the ion bombardment, and the exposed material (shown in green) now contains the implanted atoms.


The transistor is now almost finished: three holes are etched into the insulating layer (magenta) above it. These holes will be filled with copper, which is essential to connect each transistor to the others.





Copper ions are then deposited on the transistor by electroplating: the wafer is immersed in a copper sulphate (CuSO4) solution and an electric field is applied, so that copper ions leave the positive terminal (anode) and are deposited on the negative one (cathode), the wafer itself.

The result is a thin copper layer over the wafer; the excess material is then polished away.








Several metal layers form the thin wires between transistors, which are essential to the structure and function of the CPU.
To the naked eye a CPU looks very thin, but under a microscope it reveals about 20 layers of interconnect circuitry above the transistors.





From transistors to “dies”…

The CPU is now complete, but it has to be tested: every transistor and circuit is checked to verify that the whole system works correctly.



Once its quality has been verified, the wafer is cut into individual units called "dies". The properly functioning dies are kept and the others are discarded.

Now the die is placed in its case with a heatsink and...



a new CPU is born!

Educated cities

If we look at the growth trends of various cities, we see that they are strongly related to the historical period and to the technological development of the country. For example, many cities such as Detroit, Liverpool, Cleveland and St. Louis are today half the size they were in 1950. In several cases this decline happened alongside the exponential growth of other cities. So what are the main factors that shape the destiny of a city?

The main factor is the development of technology: the cities mentioned above were mainly industrial, and this was the cause of their decline in the second half of the 20th century. Advances in technology and communication shifted demand away from the industrial and manufacturing sector towards services.


However, not all cities accepted their decline, and many managed to reinvent themselves. Seattle, for instance, is now considered one of the capitals of the information economy, thanks to companies such as Microsoft, Amazon, AT&T and T-Mobile.

On the other hand, more and more cities are basing their success on education. Boston and New York are the clearest examples of this trend, and their soaring success seems to confirm that education may be the most productive sector a city can invest in. Indeed, the presence of colleges in a metropolitan area in 1940 is associated with higher earnings and faster growth today. Higher education may be the only way to guarantee a city's growth in the long run.


Higher education leads to more specialized jobs and higher wages. A metropolitan area with a high level of education tends to develop several sectors in which local residents are specialized, and the emergence of a new skilled class, able to meet new challenges, can eventually drive further development of the city.

In long-term planning, the only solid foundation for growth may be education. Skills, not structures, are the most effective way to avoid urban failure.

Survivors

Since the seventeenth century, over 90% of extinctions have been caused by man. The main causes are overexploitation of habitats, hunting, pollution and competition with alien species introduced by humans. Here follows a very short list of endangered species, including some of the most extraordinary creatures living on Earth.

The great white shark (Carcharodon carcharias) is a fish of the order Lamniformes and the last representative of the genus Carcharodon. Commercial hunting and overfishing of the species on which it feeds are the main threats to the survival of the largest living predatory fish. According to a recent estimate, about 3,500 specimens remain, distributed across the world's oceans.

The tiger (Panthera tigris) is the largest living felid, comparable in size to the great felids of the past. It is a formidable predator at the top of its food chain, but only about 3,000 specimens remain, mainly because of hunting.

The most endangered felid is probably the Iberian lynx (Lynx pardinus), of which fewer than 100 individuals remain. It would be the first felid to go extinct since Smilodon populator, about 10,000 years ago.

The mountain gorilla (Gorilla beringei beringei) is a primate whose conservation status is classified by the IUCN (International Union for Conservation of Nature) as "critically endangered", the level preceding extinction in the wild: a recent census estimated a population of about 800 individuals. Another endangered member of the family Hominidae is the Sumatran orangutan (Pongo abelii), with a population of about 7,000 specimens.

Many rhinoceros species are considered critically endangered too; their survival is threatened mainly by poaching. For example, only about 3,600 black rhinoceros (Diceros bicornis), 275 Sumatran rhinoceros (Dicerorhinus sumatrensis) and a few tens of Javan rhinoceros (Rhinoceros sondaicus) remain.



This list, unfortunately, could go on for pages and pages.


05 October, 2011

Homeopathy: myths and legends

The German physician Samuel Hahnemann founded this branch of alternative medicine in 1796. It is based on highly diluted solutions, which have been found to be no more effective than ordinary placebos. Its basic idea is that "like cures like", also known as the law of similars, according to which whatever causes your symptoms will also cure them.




Hahnemann also proposed that the effect of these treatments could be enhanced by diluting them in water, according to the "law of infinitesimals". Take caffeine as an example: a 1C remedy is made by diluting a single drop in 99 drops of water. Taking a drop of this solution and diluting it in another 99 drops of water gives a 2C solution, which is 99.99% water and 0.01% caffeine. Homeopathic remedies are usually sold at dilutions between 6C and 30C, in small sugar pills. It is essential to point out that around 12C we pass the Avogadro limit, beyond which it is very likely that not a single molecule of the original substance is left.
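The arithmetic behind the Avogadro limit is easy to reproduce; here is a small Python sketch, assuming (purely for illustration) that the starting drop contains about 5 x 10^-6 mol of caffeine:

# Expected number of caffeine molecules surviving an n-C homeopathic dilution.
# Each "C" step keeps 1 part in 100, so n steps keep a fraction 100**-n.
N_A = 6.022e23        # Avogadro's number, molecules per mole
start_moles = 5e-6    # caffeine in the first drop (assumed for illustration)

def molecules_left(n_c: int) -> float:
    return start_moles * N_A * 100.0 ** (-n_c)

for n_c in (6, 12, 30):
    print(f"{n_c}C: about {molecules_left(n_c):.2e} molecules expected")

Already at 12C the expected count is far below one molecule, so a 30C pill almost certainly contains nothing but sugar and water.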


Hahnemann also proposed the "law of succussion", according to which shaking the homeopathic remedy increases its potency: the vibrations would supposedly allow the water to retain a memory of the original substance. The idea that water can remember whatever was dissolved in it before dilution was proposed by Jacques Benveniste, who was awarded the Ig Nobel prize twice for this theory.



Homeopathy therefore cannot be considered a real science, and it cannot compete with real medicine. The problem is that many people still use it, even though it has been shown to be ineffective. This is not a big deal if the treatments are used for a common cold or flu: in those cases they cause no great harm, because the illness will pass anyway, even if it takes a little longer. But if homeopathy replaces real medicine for more serious diseases, it can eventually lead to death through the loss of effective treatment.


It is important to stress, however, that this pseudo-medicine has no specific medical effect at all. Sometimes people do get better after these treatments, but at a rate no different from that of the placebo effect. Homeopathy can indeed seem useful for psychosomatic complaints, where the mere idea of taking a medicine can help. The main point is that any condition that can be "cured" by homeopathy can be cured by a placebo too, because the actual chemical and physical action is the same.


Yet no one would ever think of curing cancer with a placebo...