Saturday, 10 September 2016

                                    Quantum Theory

Planck’s radiation law
By the end of the 19th century, physicists almost universally accepted the wave theory of light. However, though the ideas of classical physics explain interference and diffraction phenomena relating to the propagation of light, they do not account for the absorption and emission of light. All bodies radiate electromagnetic energy as heat; in fact, a body emits radiation at all wavelengths. The energy radiated at different wavelengths is a maximum at a wavelength that depends on the temperature of the body; the hotter the body, the shorter the wavelength for maximum radiation. Attempts to calculate the energy distribution for the radiation from a blackbody using classical ideas were unsuccessful. (A blackbody is a hypothetical ideal body or surface that absorbs and reemits all radiant energy falling on it.) One formula, proposed by Wilhelm Wien of Germany, did not agree with observations at long wavelengths, and another, proposed by Lord Rayleigh (John William Strutt) of England, disagreed with those at short wavelengths.


In 1900 the German theoretical physicist Max Planck made a bold suggestion. He assumed that the radiation energy is emitted, not continuously, but rather in discrete packets called quanta. The energy E of the quantum is related to the frequency ν by E = hν. The quantity h, now known as Planck’s constant, is a universal constant with the approximate value of 6.62607 × 10⁻³⁴ joule∙second. Planck showed that the calculated energy spectrum then agreed with observation over the entire wavelength range.
Einstein and the photoelectric effect

In 1905 Einstein extended Planck’s hypothesis to explain the photoelectric effect, which is the emission of electrons by a metal surface when it is irradiated by light or more-energetic photons. The kinetic energy of the emitted electrons depends on the frequency ν of the radiation, not on its intensity; for a given metal, there is a threshold frequency ν0 below which no electrons are emitted. Furthermore, emission takes place as soon as the light shines on the surface; there is no detectable delay. Einstein showed that these results can be explained by two assumptions: (1) that light is composed of corpuscles, or photons, the energy of which is given by Planck’s relationship, and (2) that an atom in the metal can absorb either a whole photon or nothing. Part of the energy of the absorbed photon frees an electron, which requires a fixed energy W, known as the work function of the metal; the rest is converted into the kinetic energy meu²/2 of the emitted electron (me is the mass of the electron and u is its velocity). Thus, the energy relation is

hν = W + meu²/2.

If ν is less than ν0, where hν0 = W, no electrons are emitted. Not all the experimental results mentioned above were known in 1905, but all of Einstein’s predictions have been verified since.
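Einstein’s relation hν = W + meu²/2 can be put to work numerically. Below is a minimal sketch; the work function value (2.28 eV, roughly that of sodium) is an illustrative assumption, not taken from the text.

```python
# Numerical sketch of Einstein's photoelectric relation: h*nu = W + (1/2)*m_e*u^2.
# The work function used below (2.28 eV, roughly sodium) is illustrative only.

H = 6.62607e-34        # Planck's constant, J*s
EV = 1.602177e-19      # joules per electronvolt
M_E = 9.109384e-31     # electron mass, kg

def electron_speed(nu_hz, work_function_ev):
    """Return the emitted electron's speed in m/s, or None below the threshold frequency."""
    kinetic = H * nu_hz - work_function_ev * EV
    if kinetic <= 0:
        return None                    # nu < nu0: no electrons are emitted
    return (2 * kinetic / M_E) ** 0.5

# Threshold frequency nu0 = W / h
nu0 = 2.28 * EV / H
print(f"threshold frequency: {nu0:.3e} Hz")
print(electron_speed(4e14, 2.28))      # below threshold: None
print(electron_speed(7e14, 2.28))      # above threshold: a speed in m/s
```

Note that doubling the intensity of the light changes neither result; only a higher frequency raises the electron’s kinetic energy, exactly as observed.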
Bohr’s theory of the atom
A major contribution to the subject was made by Niels Bohr of Denmark, who applied the quantum hypothesis to atomic spectra in 1913. The spectra of light emitted by gaseous atoms had been studied extensively since the mid-19th century. It was found that radiation from gaseous atoms at low pressure consists of a set of discrete wavelengths. This is quite unlike the radiation from a solid, which is distributed over a continuous range of wavelengths. The set of discrete wavelengths from gaseous atoms is known as a line spectrum, because the radiation (light) emitted consists of a series of sharp lines. The wavelengths of the lines are characteristic of the element and may form extremely complex patterns. The simplest spectra are those of atomic hydrogen and the alkali atoms (e.g., lithium, sodium, and potassium). For hydrogen, the wavelengths λ are given by the empirical formula

1/λ = R(1/m² − 1/n²),

where m and n are positive integers with n > m and R, known as the Rydberg constant, has the value 1.097373157 × 10⁷ per metre. For a given value of m, the lines for varying n form a series. The lines for m = 1, the Lyman series, lie in the ultraviolet part of the spectrum; those for m = 2, the Balmer series, lie in the visible spectrum; and those for m = 3, the Paschen series, lie in the infrared.
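The empirical formula 1/λ = R(1/m² − 1/n²) is easy to evaluate. A short sketch:

```python
# Hydrogen line wavelengths from the empirical formula 1/lambda = R*(1/m^2 - 1/n^2).
R = 1.097373157e7  # Rydberg constant, per metre

def wavelength_nm(m, n):
    """Wavelength of the hydrogen line for integers n > m, in nanometres."""
    if n <= m:
        raise ValueError("n must exceed m")
    inv_lambda = R * (1 / m**2 - 1 / n**2)   # per metre
    return 1e9 / inv_lambda                   # metres -> nanometres

# The Balmer series (m = 2) falls in the visible range:
for n in (3, 4, 5):
    print(f"m=2, n={n}: {wavelength_nm(2, n):.1f} nm")
```

The first Balmer line (m = 2, n = 3) comes out at about 656 nm, the familiar red line of hydrogen; the Lyman line (m = 1, n = 2) at about 121.5 nm lies in the ultraviolet, as the text states.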
Bohr started with a model suggested by the New Zealand-born British physicist Ernest Rutherford. The model was based on the experiments of Hans Geiger and Ernest Marsden, who in 1909 bombarded gold atoms with massive, fast-moving alpha particles; when some of these particles were deflected backward, Rutherford concluded that the atom has a massive, charged nucleus. In Rutherford’s model, the atom resembles a miniature solar system with the nucleus acting as the Sun and the electrons as the circulating planets. Bohr made three assumptions. First, he postulated that, in contrast to classical mechanics, where an infinite number of orbits is possible, an electron can be in only one of a discrete set of orbits, which he termed stationary states. Second, he postulated that the only orbits allowed are those for which the angular momentum of the electron is a whole number n times (h/2π). Third, Bohr assumed that Newton’s laws of motion, so successful in calculating the paths of the planets around the Sun, also applied to electrons orbiting the nucleus. The force on the electron (the analogue of the gravitational force between the Sun and a planet) is the electrostatic attraction between the positively charged nucleus and the negatively charged electron. With these simple assumptions, he showed that the energy of the orbit has the form
En = −E0/n²,

where E0 is a constant that may be expressed by a combination of the known constants e, me, and h. While in a stationary state, the atom does not give off energy as light; however, when an electron makes a transition from a state with energy En to one with lower energy Em, a quantum of energy is radiated with frequency ν, given by the equation

hν = En − Em.
 Inserting the expression for En into this equation and using the relation λν = c, where c is the speed of light, Bohr derived the formula for the wavelengths of the lines in the hydrogen spectrum, with the correct value of the Rydberg constant.
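The closing steps can be written out explicitly. A brief sketch, writing E0 for the constant in the orbit energy:

```latex
E_n = -\frac{E_0}{n^2}, \qquad
h\nu = E_n - E_m = E_0\left(\frac{1}{m^2} - \frac{1}{n^2}\right), \qquad
\frac{1}{\lambda} = \frac{\nu}{c} = \frac{E_0}{hc}\left(\frac{1}{m^2} - \frac{1}{n^2}\right)
```

Comparing the last expression with the empirical hydrogen formula identifies the Rydberg constant as R = E0/(hc), which is how Bohr obtained its correct value from known constants.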

Bohr’s theory was a brilliant step forward. Its two most important features have survived in present-day quantum mechanics. They are (1) the existence of stationary, nonradiating states and (2) the relationship of radiation frequency to the energy difference between the initial and final states in a transition. Prior to Bohr, physicists had thought that the radiation frequency would be the same as the electron’s frequency of rotation in an orbit.

Monday, 15 August 2016


SCIENCE AND TECHNOLOGY

Religion
From the beginning of time, religion has been considered the panacea for all ills, and mankind, despite moments of doubt, has always leaned on religious faith for solace.
Religion has been defined as a system of faith and worship in practice within a group of people living in a community. Belief in God or a Supreme Being is the basic premise of all religions, and even those who worship Nature bow down to a super power who makes all the marvels of nature possible. Thousands of believers have pursued, with single-mindedness, the path of devotion for future salvation.
Through religious behavior, humans seek to adapt to, cope with, or understand dimensions of life beyond their explanation or control. These manifestations have differed according to place and time. The simple folk never really doubted the existence of God.
Unfortunately, although a number of people believe in Him vaguely, each set of believers has had its own version of God, with its own theories of what He looked like and what He said to them. The most effective way to identify religious faith in society has always been through ritualistic expression. When divisions began to appear in these ritualistic expressions, conflict between the various practitioners of faith resulted.
Religion has played a phenomenal role in shaping our history from ancient to modern times. When the age of reason succeeded the age of faith, God was temporarily buried, and people wondered: if human reason was so powerful, did men need God?
Critical thinkers in the 19th and 20th centuries began to say that religion is the opium of the people. According to Karl Marx, religion suppressed social change. Darwin’s theory blew up basic religious tenets. Marx saw close links between the ruling class and the heads of religions and was eager to blow up both. God was dead, announced Nietzsche.
The traditional function of religion seems to have been one of providing a system of meaningful interaction by defining taboos or reinforcing rules without which society could disintegrate. Young people today are perplexed, looking for a fresh concept of faith that will give them freedom side by side with stability.
E. M. Forster asserts that tolerance, good temper, and sympathy are what really matter, and that if the human race is not to collapse, they must come to the front before long. The function of our universities is to produce in students the quality of compassion for suffering humanity and the quality which enables individuals to treat one another in a truly democratic spirit.

Economic growth
Scientific discoveries and the consequent technological changes have completely revolutionized the lifestyle and living standards of people.
Since the advent of the industrial revolution, different periods have been marked by advances in different clusters of inventions. The first wave of invention, which lasted for 60 years beginning in 1785, was marked by progress in water power, textiles, and iron. The second wave lasted for 55 years, between 1845 and 1900, and was propelled by inventions in rail and steel. The third wave, beginning in 1900 and running to the end of the first half of the century, was marked by inventions in electricity, chemicals, and the internal combustion engine. The fourth wave was powered by oil, electronics, aviation, and mass production. India is in the midst of the fifth wave, dominated by semiconductors, fibre optics, genetics, and software.

Strides in the 20th century
The century opened on a bright note—with the electric powered lamp. Science then advanced at supersonic speed.
The automobile rolled out, the airplane took off, and man in a great leap conquered space. Information technology made possible a global village and artificial intelligence opened new windows to cyberspace. Man played God to create and to destroy. He split the atom to destroy his brothers and cloned beings to create a brave new world.

Information Technology
Information technology (IT), which comprises electronic computer technology and telecommunication technology, has in a few decades changed our society. Behind this development lies an advanced scientific and technical development originating from fundamental scientific inventions.
Information technology has been in the process of bringing about openness, networking, democratic functioning, and social transformation. Technology is changing societies across the globe in terms of work, education, thought processes, and overall lifestyle. It brings transparency, responsibility, accountability, and better social justice.
The rapid development of electronic computer technology started with the invention of the integrated circuit (IC) around 1960 and the microprocessor in the 1970s, when the number of components on a chip became sufficiently large to allow the creation of a complete microcomputer.
Chip development has been matched by an equally dynamic and powerful development in telecommunication technology. Just as the IC has been and is a prime mover for electronic computer technology, ultra-rapid transistors and semiconductor lasers based on heterostructures of semiconductors are playing a decisive part in telecommunication.
The invention of the transistor in late 1947 is usually taken to mark the start of the development of modern semiconductor technology (Nobel Prize in Physics 1956 to William B. Shockley, John Bardeen, and Walter H. Brattain). With the transistor there came a component that was considerably smaller, more reliable, and less energy-consuming than the radio valve, which thus lost its importance.
At the beginning of the 1950s there were ideas about manufacturing transistors, resistors, and capacitors in a single composite semiconductor block, an IC. The IC is more a technical invention than a discovery. However, it is evident that it embraces many physical issues. One example is the question of how aluminum and gold, which are part of an IC, differ regarding their adhesion to silicon. Another is how to produce dense layers that are only a few atoms thick.

THE TRANSISTOR ERA
M. J. Kelly, director of research at Bell Laboratories, had the foresight to recognize that reliable, expanded telephone communication required electronic, rather than electro-mechanical, switching and better amplifiers. He formed a solid-state research group consisting of theoretical and experimental physicists.
The transistor was born on 16 December 1947.
In 1951, three years after the discovery of amplification in a solid, transistors were produced commercially. The silicon transistor was produced in 1954 by Texas Instruments. By 1961, Texas Instruments and others were commercially producing ICs in the USA.

  1. discrete transistors
  2. small scale integration        <100 components
  3. medium scale integration    100 to 1000 components
  4. large scale integration        1000 to 10000 components
  5. very large scale integration    >10000 components

The junction field-effect transistor (JFET) was produced by Teszner in France in 1958, and the metal-oxide-semiconductor field-effect transistor (MOSFET) by Bell Laboratories in 1960. Operational-amplifier ICs (the μA709) were produced in 1964.
ICs have made the marriage of communication and computation possible: digital signal processing.
The first microprocessor, the Intel 4004, was launched in 1971. It contained 2,300 transistors and ran at 0.1 MHz. In the early eighties, at the dawn of the PC era, the clock speed of a PC’s processor was 5 MHz. Fifty years on, we are surrounded by millions of transistors, in radios, televisions, telephones, and computers.

THE PERSONAL COMPUTER (PC)
California was home to the first Silicon Valley in the USA, where the experimental PC appeared around 1970. 1975 saw the Altair 8800 PC, but the fully developed PC, with Intel hardware and Microsoft software, emerged only in 1981. Within two decades, from 1981 to 2001, one billion PCs were sold all over the world.

INTERNET BACKGROUND
The Advanced Research Projects Agency (ARPA) was launched in the USA around 1969 to set up a packet-switched network consisting of a subnet and host computers. By 1974 ARPA had developed a model of protocols known as TCP/IP for data communication over internetworks. The TCP/IP model and protocols were specifically designed to handle communication over internetworks. By 1983 ARPANET was stable and successful. By 1984 NSF decided to build a backbone network. By 1990 the internet was born in the USA, with 3,000 networks and 200,000 computers. In 1992 the Internet Society was formed. By 1995 there was exponential growth of internet services throughout the world.
The TCP/IP reference model and protocol stack make universal service possible and can be compared to the telephone system.

INTERNET
The internet is a network of connections through which information can be transmitted from one point to another; in a way it is quite similar to the network of roads which facilitates movement of vehicles from one place to another. In road transport, a highway is a rather wide road unencumbered by obstacles, so that vehicles can move on it at very high speeds. Information superhighways are similar connections that permit communication of digital information at very high speeds.
The rules of communication are often referred to as protocols. When a message is sent through the internet, it is not transmitted through a dedicated line, as is the case with the telephone. Instead, the message is broken up into pieces (packets) using a transmission protocol, and the internet protocol assigns each packet its distinctive identification, which includes the address of the sender as well as that of the receiver. The message is then reassembled at the receiving end.
The transmission control protocol (TCP) breaks up the information sent on the internet into units, each containing 1-150 bytes. It numbers each of the units, puts each into a packet, and thus helps to send it over the network. The internet protocol (IP) governs the way these packets are addressed and routed along the internet. Thus the various packets that comprise a message may travel different routes and take different times to arrive at the destination. Some may even get damaged on the way. At the recipient’s end TCP extracts the data from each packet, checks for accuracy, and reassembles them into their original order. If it finds that any data are lost or damaged, it requests the sending computer to transmit them again. Thus these protocols (TCP/IP) really make communication through the internet possible.
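The split-number-reassemble cycle just described can be sketched in a few lines. This is a toy illustration, not real TCP; the packet size and field names are chosen only to mirror the description in the text.

```python
# Toy sketch of packetizing and reassembling a message, as described above.
# PACKET_SIZE and the field names are illustrative, not real TCP/IP formats.
import random

PACKET_SIZE = 150  # the text's upper figure for bytes per unit

def packetize(message: bytes, src: str, dst: str):
    """Split a message into numbered packets carrying sender and receiver addresses."""
    return [
        {"seq": i, "src": src, "dst": dst, "data": message[off:off + PACKET_SIZE]}
        for i, off in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def reassemble(packets):
    """Reorder by sequence number and concatenate, as the receiving TCP does."""
    return b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

msg = b"x" * 400
pkts = packetize(msg, "192.41.6.20", "10.0.0.7")
random.shuffle(pkts)                  # packets may arrive out of order
assert reassemble(pkts) == msg        # the original message is recovered
print(len(pkts), "packets")           # 400 bytes -> 3 packets
```

Because each packet carries its own sequence number and addresses, the network is free to route them independently; ordering is restored only at the receiving end.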
A machine (PC) is on the internet if it runs the TCP/IP protocol stack, has an IP address, and has the ability to send IP packets to all the other machines on the internet. The internet has had four main applications: e-mail, news, remote login, and file transfer.

Internet as a global information system
Transmission control protocol (TCP) and the internet protocol (IP) – these protocols are usually lumped together as TCP/IP and are embedded in the software for operating systems.
Servers
Servers are computers dedicated to the purpose of providing information to the internet. They run specialized software for each type of internet application. These include e-mail, discussion groups, long distance computing and file transfers.
Routers
Routers are computers that form part of the communication net and that route or direct the data along the best available paths into the networks.
The network architecture is referred to as TCP/IP. The data are transmitted in packets. Many separate functions are to be performed in packet transmission, such as packet addressing, routing, and coping with packet congestion.
Internet protocol (IP)
In this layer, the packets of information are passed along the internet from router to router and to the host stations. No exact path is laid out beforehand, and the IP layers in the routers must provide the destination address for the next leg of the journey, so to speak. This destination address is part of the IP header attached to the packet. The source address is also included as part of the IP header. The problems of lost packets or packets arriving out of sequence are not a concern of the IP layer.
Transmission control protocol (TCP)
With TCP, information is passed back and forth between transport layers, which control the information flow. This includes such information as the correct sequencing of the packets, replacement of lost packets, and adjustment of the transmission rate of packets to prevent congestion. The TCP layer is termed connection-oriented, because sender and receiver must be in communication with each other to implement the protocol.
All TCP connections are full-duplex and point-to-point. Every byte on a TCP connection has its own 32-bit sequence number. Sending and receiving TCP entities exchange data in the form of segments. The TCP protocol has to address the following:
  1. the TCP segment header
  2. TCP connection management
  3. TCP transmission policy
  4. TCP congestion control
  5. TCP time management.

TCP link
A virtual communication link exists between corresponding layers in the network. The send and receive layers have buffer memories. The receive buffer holds incoming data while they are being processed. The send buffer holds data until they are ready for transmission. It also holds copies of data already sent until it receives an acknowledgement that the original has been received correctly.
The receive window is the amount of receive buffer space available at any given time. This changes as the received data are processed and removed from the buffer. The receive layer sends an acknowledgement signal to the send TCP layer when it has cleared data from its buffer, and the acknowledgement also provides an update on the current size of the receive window, and so on.
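The buffer-and-window bookkeeping described above can be sketched as follows. The class name, capacity, and method names are invented for illustration; real TCP advertises the window size in every acknowledgement segment.

```python
# Minimal sketch of receive-window bookkeeping: the window is simply the free
# space left in the receive buffer, reopened as the application drains data.
# Names and sizes here are illustrative, not part of any real TCP stack.

class ReceiveBuffer:
    def __init__(self, capacity):
        self.capacity = capacity   # total receive buffer space, in bytes
        self.held = 0              # bytes received but not yet processed

    def window(self):
        """Free space the receiver can advertise to the sender."""
        return self.capacity - self.held

    def receive(self, nbytes):
        """Accept incoming data; a correct sender never exceeds the window."""
        if nbytes > self.window():
            raise OverflowError("sender exceeded the advertised window")
        self.held += nbytes

    def process(self, nbytes):
        """The application reads data out of the buffer, reopening the window."""
        self.held -= min(nbytes, self.held)

buf = ReceiveBuffer(capacity=1000)
buf.receive(600)
print(buf.window())   # 400 bytes left to advertise in the next acknowledgement
buf.process(500)
print(buf.window())   # 900 after the application drains 500 bytes
```

This is the flow-control half of the story; congestion control adjusts the sending rate on top of whatever window the receiver advertises.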


IP Address
Every host and router on the internet has an IP address, which encodes its network number and host number. The combination is unique: no two machines have the same IP address. All IP addresses are 32 bits long and are used in the source address and destination address fields of IP packets. The 32-bit numbers are usually written in dotted decimal notation. For example, the hexadecimal address C0290614 is written as 192.41.6.20. The lowest IP address is 0.0.0.0 and the highest is 255.255.255.255.
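The text’s example conversion from a 32-bit hexadecimal address to dotted decimal can be checked in code. A minimal sketch:

```python
# Convert a 32-bit IP address to dotted decimal notation, reproducing the
# text's example: 0xC0290614 -> 192.41.6.20.

def dotted_decimal(addr32: int) -> str:
    """Write a 32-bit IP address as four decimal octets separated by dots."""
    return ".".join(str((addr32 >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(dotted_decimal(0xC0290614))  # 192.41.6.20
print(dotted_decimal(0x00000000))  # 0.0.0.0, the lowest address
print(dotted_decimal(0xFFFFFFFF))  # 255.255.255.255, the highest
```

Each octet is one byte of the 32-bit number, which is why every field of a dotted-decimal address runs from 0 to 255.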
Subnet
A subnet here means the set of all routers and communication lines in a network. Each router has a table listing some number of network IP addresses and some number of host IP addresses. The first kind tells how to get to distant networks; the second kind tells how to get to local hosts. When an IP packet arrives, its destination address is looked up in the routing table. If the packet is for a distant network, it is forwarded to the next router on the interface given in the table. If it is for a local host, it is sent directly to the destination. If the network is not present, the packet is forwarded to a default router with more extensive tables. Subnetting reduces router table space by creating a three-level hierarchy.
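The three-way lookup described above (local host, known distant network, else default router) can be sketched as a toy function. The addresses, table entries, and the crude first-octet network matching are all invented for illustration; real routers match on network masks of varying length.

```python
# Toy version of a router's three-way forwarding decision, as described above.
# Table contents and the first-octet "network number" match are illustrative.

LOCAL_HOSTS = {"192.41.6.20"}                      # hosts on this router's own network
NETWORK_ROUTES = {"10": "forward to router A",     # distant networks, keyed here by
                  "172": "forward to router B"}    # first octet (a crude simplification)
DEFAULT_ROUTE = "forward to default router"        # fallback with more extensive tables

def route(dst: str) -> str:
    """Look up a destination address: local host, known network, else default."""
    if dst in LOCAL_HOSTS:
        return "deliver directly to local host"
    network = dst.split(".")[0]
    return NETWORK_ROUTES.get(network, DEFAULT_ROUTE)

print(route("192.41.6.20"))  # deliver directly to local host
print(route("10.1.2.3"))     # forward to router A
print(route("8.8.8.8"))      # forward to default router
```

The point of the hierarchy is visible in the table sizes: the router needs one entry per reachable network plus one per local host, not one per machine on the internet.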

Necessity of Modem
Attenuation and propagation speed are frequency dependent. Square waves in digital data have a wide spectrum and thus are subject to strong attenuation and delay distortion. These effects make baseband (DC) signaling unsuitable except at slow speeds and over short distances. To get around the problems associated with DC signaling, especially on telephone lines, AC signaling is used. A continuous tone in the 1000 to 2000 Hz range is introduced. Its amplitude, frequency, or phase can be modulated to transmit information.
Internet services

Telephone companies and others have begun to offer networking services to any organization that wishes to subscribe. The subnet is owned by the network operator, providing communication service for the customers’ terminals.

Sunday, 27 March 2016


Industrial revolution

Watt’s rotary steam engine was being perfected just at the moment that iron-working was improving and textile inventions were becoming more powerful, greater in size, and in need of better, cheaper, and more reliable power sources. The new steam engine could be harnessed to all these new inventions. In 1782, the year after Watt perfected the rotary steam engine, there were only two cotton mill factories in Manchester. Twenty years later there were more than 50.

INNOVATION AND INDUSTRIALIZATION
The textile industry, in particular, was transformed by industrialization. Before mechanization and factories, textiles were made mainly in people’s homes (giving rise to the term cottage industry), with merchants often providing the raw materials and basic equipment, and then picking up the finished product. Workers set their own schedules under this system, which proved difficult for merchants to regulate and resulted in numerous inefficiencies. In the 1700s, a series of innovations led to ever-increasing productivity, while requiring less human energy. For example, around 1764, Englishman James Hargreaves (1722-1778) invented the spinning jenny (“jenny” was an early abbreviation of the word “engine”), a machine that enabled an individual to produce multiple spools of thread simultaneously. By the time of Hargreaves’ death, there were over 20,000 spinning jennies in use across Britain. The spinning jenny was improved upon by British inventor Samuel Crompton’s (1753-1827) spinning mule, as well as later machines. Another key innovation in textiles, the power loom, which mechanized the process of weaving cloth, was developed in the 1780s by English inventor Edmund Cartwright (1743-1823).
Developments in the iron industry also played a central role in the Industrial Revolution. In the early 18th century, Englishman Abraham Darby (1678-1717) discovered a cheaper, easier method to produce cast iron, using a coke-fueled (as opposed to charcoal-fired) furnace. In the 1850s, British engineer Henry Bessemer (1813-1898) developed the first inexpensive process for mass-producing steel. Both iron and steel became essential materials, used to make everything from appliances, tools and machines, to ships, buildings and infrastructure.
The steam engine was also integral to industrialization. In 1712, Englishman Thomas Newcomen (1664-1729) developed the first practical steam engine (which was used primarily to pump water out of mines). By the 1770s, Scottish inventor James Watt (1736-1819) had improved on Newcomen’s work, and the steam engine went on to power machinery, locomotives and ships during the Industrial Revolution.
Chemicals for Textiles
 Output of wool and cotton cloth grew substantially in the late 18th and early 19th centuries as a result of the mechanisation of the textile industry and the needs of the expanding population.
 In the earlier days the cleansing and bleaching of cloth was achieved by the processes of bucking (soaking in alkali for a week), souring (soaking in buttermilk for a week), and crofting (exposing the cloth for several weeks to sunshine and rain in bleachfields on south-facing slopes).
In the late 18th century sulphuric acid for souring, and chemical bleaching (initially using chlorine in caustic alkali and, later, bleaching powder), came to be used; the use of chemicals speeded up the whole process considerably and reduced the amount of working capital tied up in unfinished goods. (‘Bleachfield’ on the University of York campus was a crofting site, and by about 1850 a bleachworks stood there.)
In addition to the direct use of alkali, more was needed for the manufacture of soap (production of which, mainly for textile use, rose from about 1,500 tons in 1785 to over 50,000 tons in 1830). Still more alkali was needed for glass manufacture, production of which for windows in housing increased as a further consequence of the population explosion.
The Leblanc Process
The process was a messy batch process. Salt was treated with sulphuric acid; the resulting ‘salt cake’ (sodium sulphate) was mixed with limestone and coal (or, better, coke) and roasted to produce ‘black ash’, an impure mixture of sodium carbonate and calcium sulphide:

2NaCl + H2SO4 → Na2SO4 + 2HCl
Na2SO4 + CaCO3 + 2C → Na2CO3 + CaS + 2CO2

The sodium carbonate was extracted with water and the solution was evaporated to dryness in open pans; if necessary for higher purity (e.g. for glass manufacture), the product was recrystallised.
The operation of the Leblanc process was environmentally noxious. In the early days the acid fumes from the initial stage were vented to the atmosphere, and the smelly residual wet sludge from the black ash extraction stage was dumped. The emission of HCl fumes was a nuisance to neighbours in spite of the palliative use of tall chimneys (as high as 145 m at St Rollox), and litigation was frequent. Gossage introduced scrubbing towers in 1836, in which the fumes were absorbed by descending water streams, and they were increasingly used. Their use became general after the passing of the 1863 Alkali Act, which made the absorption of at least 95% of the acid fume obligatory; the Act also set up an Alkali Inspectorate to enforce the measure. Initially the acid absorbate was often discharged to rivers, but it came to be recognised as a useful source of chlorine for absorption in lime (CaO) to make bleaching powder, a product introduced by Tennant in 1799. The chlorine required for this purpose was released from the hydrochloric acid solution by heating with the mineral pyrolusite (MnO2):

4HCl(aq) + MnO2 → Cl2 + MnCl2 + 2H2O

Partially successful efforts were made by Gossage as early as 1837 to regenerate the scarce manganese dioxide by:

2MnCl2 + 2Ca(OH)2 + O2 → 2MnO2 + 2H2O + 2CaCl2

But it was not until the 1860s that the recovery process was perfected by Weldon, who used excess lime.
Also in the 1860s the Deacon process for the catalytic oxidation of gaseous HCl to Cl2 (using a CuCl2 catalyst) came into use. Using these processes, the manufacture of bleaching powder as an adjunct to alkali manufacture became firmly established in the 1860s, and bleachfields disappeared. The dumping of the sulphide sludge was not only environmentally offensive; it represented the total loss of the sulphur from the sulphuric acid produced by the lead chamber process, which played a key role in the alkali industry. However, effective recovery of sulphur from the sulphide waste lay in the future. Originally the sulphur came from Sicily, but in 1838 the price of the raw material doubled owing to monopolistic behaviour; within a very short time the mineral pyrite (FeS2) was substituted. It was roasted in air to generate sulphur dioxide:

4FeS2 + 11O2 → 8SO2 + 2Fe2O3

and the iron oxide byproduct was disposable to iron works. Another source of sulphur of importance later on was ‘spent oxide’ from gasworks, which could also be used in the pyrite burners.

The Ammonia-Soda Process
The process, essentially involving the reaction of carbon dioxide with an ammonia-saturated solution of salt, was first proposed by Fresnel (better known for his work on optics) in 1811. Various attempts were made in Britain (Thom in Scotland 1836, Muspratt in the 1840s, Deacon 1856) to achieve a workable process, but all were on a small scale and none was really successful. The effective establishment of an economic, large-scale process was achieved in Belgium in 1865 by Solvay, who overcame the engineering problems of gas handling and absorption. A licence for the exclusive operation of the process in Britain was acquired in 1872 by Mond who, with Brunner, started a works in Cheshire in 1874. In the meantime some variants of the Solvay process were also established in England but were later taken over and shut down by Brunner, Mond & Co. The full process comprised a number of stages:

CaCO3 → CaO + CO2 (1)
CaO + 2NH4Cl → CaCl2 + 2NH3 + H2O (2)
2NH3 + 2H2O + 2CO2 → 2(NH4)HCO3 (3)
2(NH4)HCO3 + 2NaCl → 2NaHCO3 + 2NH4Cl (4)
2NaHCO3 → Na2CO3 + H2O + CO2 (5)

giving the net reaction:

CaCO3 + 2NaCl → Na2CO3 + CaCl2 (6)

Stages (3) and (4) were operated in a continuous cycle. In principle, the ammonia was not consumed and only top-up quantities were required; the only waste product was the calcium chloride which, being soluble, could be discharged to drain. The ammonia-soda process prospered and production of soda increased rapidly. Although more capital-intensive than the Leblanc plants, it was less labour-intensive, more economical in its use of raw materials, and had no serious waste problems. It presented a serious economic challenge to the Leblanc alkali industry, and it was to be the soda process of the future. In the 1870s the Solvay process was established not only in England but also in the now unified Germany and the post-Civil War USA, countries that had never had Leblanc plants and had been significant export markets for Britain.
Consequently, British exports of alkali declined. For the Leblanc producers the competition was intense. But the mostly small producers using the Leblanc process (of whom there were now a fairly large number) fought back. For a while they had the advantage that most of their plants were fully depreciated, whereas the large ammonia-soda plants had to bear heavy capital-servicing charges. Also, of considerable importance, they had the very big advantage of being able to sell the byproduct bleaching powder. They exploited this advantage further by forming the Bleaching Powder Association in 1883 to operate a cartel (which was quite legal in those days) to keep prices up. They also appreciated the need for cost-saving, and thoughts turned to sulphur recovery.

Dyestuffs

 As noted earlier, the synthetic dyestuffs industry started in 1857 with the manufacture of aniline dyes by Perkin. In the decades that followed, the range of synthetic dyes was extended considerably as great advances were made in organic chemistry; incidentally, Kekulé postulated his ring structure of benzene in 1865, the year Hofmann returned to Germany. One dye of importance was alizarin, obtained from the madder plant and still very extensively used in 1870. Methods of alizarin synthesis were devised in 1868-69 by Gräbe and Liebermann in Germany and by Perkin in England, and commercial production started in both countries in the 1870s, on a larger scale in Germany. It was from this time that the German industry expanded rapidly, with profits from alizarin manufacture an important source of finance for dyestuffs research and development; this factor, together with the greater availability of trained chemists in Germany compared with Britain, soon made Germany pre-eminent in synthetic dyestuffs production. (Switzerland also became an important dyestuffs-producing country from the 1870s as a result of the establishment there of émigré producers from France escaping patent restrictions!) Thus Britain suffered a relative decline in the dyestuffs field even though there was expansion of production here too, some of it by German and Swiss firms. Britain actually became a net importer of synthetic dyes.

Tuesday, 16 February 2016


                                                 Science and Truth

Nature is governed by hidden truth. Space, time, energy, and mass are expressions of that truth.

These relations are expressed in mathematical language. The relation between mass and energy, for example, is E = mc², and the relation between photon energy and frequency is E = hf. This truth is grounded in experimental results. The atom, the electron, electronics, the present generation of mobile telephony, and the internet are some wonderful applications of science.
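To give a sense of scale (an illustrative calculation, not from the original text), E = mc² says that even one gram of matter corresponds to an enormous amount of energy:

```python
# Rest energy of one gram of matter, from E = m * c**2
c = 2.998e8          # speed of light in vacuum, m/s
m = 1e-3             # one gram, expressed in kg
E = m * c**2
print(E)             # about 9.0e13 joules
```

That is roughly the energy released by a large fission weapon, which is why the mass-energy relation underlies nuclear power.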
Solar cells generating power on satellites, solar-powered aircraft, and atomic power are a few more examples of this hidden truth put to work.
Scientists took more than a century to understand the atoms and molecules of matter, from Dalton's concept of the atom to modern atomic theory based on quantum theory. We have produced new elements such as plutonium in laboratories and used them as fission material. All countries are connected through information technology, and there is no political control over human contact globally. Living standards are changing very rapidly. The revolution started in the 18th century, and within two hundred years of the scientific approach many wonders have been achieved worldwide. The world has shrunk to something like a small town.

We could fly like birds; we could launch satellites; we could generate radio signals and propagate messages through them. We could control those satellites and view mother Earth from space.
We have undertaken journeys to nearby planets, and we are thinking of going beyond.
We investigated the laws of motion and the law of universal gravitation, and we could predict the supernova, in which mass is converted into an enormous amount of energy in the surrounding space, and so on.
We discovered how information stored in DNA produces a living species.

The list has grown very fast in recent decades. We have cloned animals and plants, and have even tried to clone our own species, human beings. We harnessed the atom to produce superconductivity, lasers, and optical fibres, and built networks all around us, called internetworks. We sell, purchase, and even arrange marriages through networks. Online has become an everyday matter for any type of business.

A set of offices in different parts of the world can work as a single office. Boundaries among nations have begun to dissolve. Power sharing is being done through the democratic process throughout the planet.

These changes happened within just a century. And that is the power of science!
 
In fact, it seems to me that the observations on "black-body radiation", photoluminescence, the production of cathode rays by ultraviolet light, and other phenomena involving the emission or conversion of light can be better understood on the assumption that the energy of light is distributed discontinuously in space. According to the assumption considered here, when a light ray starting from a point is propagated, the energy is not continuously distributed over an ever-increasing volume, but it consists of a finite number of energy quanta, localized in space, which move without being divided and which can be absorbed or emitted only as a whole. 

Einstein put forth the energy-mass relation in 1905. The first controlled nuclear chain reaction was achieved in 1942, and in 1945 that energy was released to destroy human habitats in Japan, which saw the end of World War II. The same nuclear energy, however, was also harnessed to generate power. U-235, Pu-239, and U-233 are fissile materials. When U-238 is used as blanket material in nuclear reactors, Pu-239 can be obtained; when thorium is placed as the blanket material, U-233 can be produced.

India Turns to Thorium as Future Reactor Fuel 

Nuclear Energy Insight  

Winter 2012—Officials in India are ready to build a large-scale prototype of a reactor fueled by a combination of thorium and low-enriched uranium. 
Ratan Kumar Sinha, chairman of the Bhabha Atomic Research Center in Mumbai, recently told the U.K.’s Guardian newspaper, “The basic physics and engineering of the thorium-fueled Advanced Heavy Water Reactor are in place, and the design is ready.” He said the Indian government has begun a six-month search for a site for the 300-megawatt reactor while conducting confirmatory tests on the final design. 
India’s Advanced Heavy Water Reactor design would use the country’s abundant thorium supply. Sinha said the reactor could be operational by the end of the decade. 
One of the three elements widely considered to be useful in the generation of nuclear energy, thorium is three to four times more plentiful than uranium and is widely distributed in nature. India has one of the world’s largest thorium deposits. 
The element cannot be used alone in a reactor because it cannot split apart to release energy. However, it can be converted inside a reactor into the fissile isotope uranium-233 when used with other fissile materials such as uranium-235 or plutonium-239. 
Only a relatively small amount of uranium or plutonium is needed to convert thorium to uranium because the thorium will continue to create more fuel during normal operation in the reactor. 
The Indian plant will demonstrate the use of low-enriched uranium, which is readily available on the world market, to breed fuel from thorium. Previous thorium-based nuclear facilities used high-enriched uranium or plutonium to convert thorium. Low-enriched uranium carries a much smaller proliferation risk. 
Additionally, the used fuel from thorium reactors also mitigates proliferation concerns because it includes fewer radioactive byproducts than uranium. 
Scientists and engineers have long been interested in developing nuclear reactor technology based on thorium. In the 1960s and 1970s, thorium-based research reactors operated in the United States, Germany, and the Soviet Union. U.S. reactors in Pennsylvania (Peach Bottom and Shippingport) and Colorado (Fort St. Vrain) have used thorium. 
The thorium is placed within and around the reactor core, where it absorbs neutrons from the fission chain reaction and becomes uranium-233. The uranium is either extracted and manufactured separately into fuel or used directly within the same reactor. 
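The conversion described above proceeds by neutron capture followed by two beta decays (the half-lives are the standard textbook values, added here for context, not taken from the article):

Th-232 + n → Th-233
Th-233 → Pa-233 + β⁻   (half-life about 22 minutes)
Pa-233 → U-233 + β⁻    (half-life about 27 days)

The roughly month-long Pa-233 stage is one practical complication of the thorium cycle, since protactinium must be left to decay before the bred U-233 is available as fuel.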

Friday, 23 October 2015

The Hydrogen Atom


Time-independent Schrödinger equation

The time-independent Schrödinger equation predicts that wave functions can form standing waves, called stationary states (also called "orbitals", as in atomic orbitals or molecular orbitals). These states are important in their own right, and if the stationary states are classified and understood, then it becomes easier to solve the time-dependent Schrödinger equation for any state. The time-independent Schrödinger equation is the equation describing stationary states. (It is only used when the Hamiltonian itself is not dependent on time. However, even in this case the total wave function still has a time dependency.) 
Time-independent Schrödinger equation (general):
E\Psi=\hat H \Psi
In words, the equation states: 
When the Hamiltonian operator acts on a certain wave function Ψ, and the result is proportional to the same wave function Ψ, then Ψ is a stationary state, and the proportionality constant, E, is the energy of the state Ψ. 
The time-independent Schrödinger equation is discussed further below. In linear algebra terminology, this equation is an eigenvalue equation. 
As before, the most famous manifestation is the non-relativistic Schrödinger equation for a single particle moving in an electric field (but not a magnetic field): 
Time-independent Schrödinger equation (single non-relativistic particle):
E \Psi(\mathbf{r}) = \left[ \frac{-\hbar^2}{2\mu}\nabla^2 + V(\mathbf{r}) \right] \Psi(\mathbf{r})
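As an illustration (not part of the original text), this eigenvalue equation can be solved numerically: discretizing the second derivative on a grid turns the Hamiltonian into a matrix whose eigenvalues approximate the stationary-state energies E. A minimal sketch in Python, in units ħ = m = 1, for a particle in an infinite square well of width L = 1 (V = 0 inside, Ψ = 0 at the walls), where the exact energies are n²π²/2:

```python
import numpy as np

N = 500                                # number of interior grid points
L = 1.0
x = np.linspace(0, L, N + 2)[1:-1]     # interior points; psi = 0 at the walls
h = x[1] - x[0]                        # grid spacing

# Hamiltonian matrix H = -(1/2) * (second-difference operator); V = 0 inside the well
main = np.full(N, 1.0 / h**2)          # diagonal entries of -(1/2) D2
off  = np.full(N - 1, -0.5 / h**2)     # off-diagonal entries
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]          # three lowest energy eigenvalues
print(E)                               # close to n^2 * pi^2 / 2 = 4.93, 19.74, 44.41
```

The same recipe works for any potential V(x) by adding np.diag(V(x)) to H, which is the sense in which classifying stationary states makes general problems tractable.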



Application to the hydrogen atom 

Bohr's model of the atom was essentially a planetary one, with the electrons orbiting around the nuclear "sun." However, the uncertainty principle states that an electron cannot simultaneously have an exact location and velocity in the way that a planet does. Instead of classical orbits, electrons are said to inhabit atomic orbitals. An orbital is the "cloud" of possible locations in which an electron might be found, a distribution of probabilities rather than a precise location.[35] Each orbital is three-dimensional, rather than two-dimensional like the classical orbit, and is often depicted as a three-dimensional region within which there is a 95 percent probability of finding the electron.[36]
Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's electron as a wave, represented by the "wave function" Ψ, in an electric potential well, V, created by the proton. The solutions to Schrödinger's equation are probability distributions for the electron's position. Orbitals have a range of different shapes in three dimensions. The energies of the different orbitals can be calculated, and they accurately match the energy levels of the Bohr model.
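For hydrogen those calculated energies reduce to the familiar closed form Eₙ = −13.6 eV / n², the same levels Bohr obtained. A quick sketch (the Rydberg energy below is the standard constant, not a value taken from this article):

```python
# Hydrogen energy levels, E_n = -Ry / n^2, matching the Bohr model
RYDBERG_EV = 13.605693   # Rydberg energy in electron-volts

def energy(n):
    """Energy of the n-th hydrogen level, in eV (negative = bound)."""
    return -RYDBERG_EV / n**2

# Photon energy of the Lyman-alpha transition (n = 2 -> n = 1):
print(energy(2) - energy(1))   # about 10.2 eV
```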
Within Schrödinger's picture, each electron has four properties: 
  1. An "orbital" designation, indicating whether the particle wave is one that is closer to the nucleus with less energy or one that is farther from the nucleus with more energy;
  2. The "shape" of the orbital, spherical or otherwise;
  3. The "inclination" of the orbital, determining the magnetic moment of the orbital around the z-axis;
  4. The "spin" of the electron.
The collective name for these properties is the quantum state of the electron. The quantum state can be described by giving a number to each of these properties; these are known as the electron's quantum numbers. The quantum state of the electron is described by its wave function. The Pauli exclusion principle demands that no two electrons within an atom may have the same values of all four numbers. 
The shapes of the first five atomic orbitals: 1s, 2s, 2px, 2py, and 2pz. The colours show the phase of the wave function. 
The first property describing the orbital is the principal quantum number, n, which is the same as in Bohr's model. n denotes the energy level of each orbital. The possible values for n are integers: 
n = 1, 2, 3\ldots
The next quantum number, the azimuthal quantum number, denoted l, describes the shape of the orbital. The shape is a consequence of the angular momentum of the orbital. The angular momentum represents the resistance of a spinning object to speeding up or slowing down under the influence of external force. The azimuthal quantum number represents the orbital angular momentum of an electron around its nucleus. The possible values for l are integers from 0 to n − 1: 
l = 0, 1, \ldots, n-1.
The shape of each orbital has its own letter as well. The first shape is denoted by the letter s (a mnemonic being "sphere"). The next shape is denoted by the letter p and has the form of a dumbbell. The other orbitals have more complicated shapes (see atomic orbital), and are denoted by the letters d, f, and g. 
The third quantum number, the magnetic quantum number, describes the magnetic moment of the electron, and is denoted by ml (or simply m). The possible values for ml are integers from −l to l: 
m_l = -l, -(l-1), \ldots, 0, 1, \ldots, l.
The magnetic quantum number measures the component of the angular momentum in a particular direction. The choice of direction is arbitrary, conventionally the z-direction is chosen. 
The fourth quantum number, the spin quantum number (pertaining to the "orientation" of the electron's spin) is denoted ms, with values +1/2 or −1/2.
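Putting the four rules together, the allowed combinations of (n, l, ml, ms) can be enumerated directly; a small sketch (a hypothetical helper, not from the text) confirms the familiar shell capacity of 2n² electrons:

```python
def states(n):
    """All allowed quantum-number tuples (n, l, ml, ms) for principal number n,
    using l = 0..n-1, ml = -l..l, and ms = +1/2 or -1/2."""
    return [(n, l, ml, ms)
            for l in range(n)            # azimuthal: 0 to n-1
            for ml in range(-l, l + 1)   # magnetic: -l to l
            for ms in (+0.5, -0.5)]      # spin: two orientations

for n in (1, 2, 3):
    print(n, len(states(n)))   # 2, 8, 18 -- the 2*n**2 shell capacities
```

Combined with the Pauli exclusion principle, this counting is what produces the 2, 8, 18, ... structure of the periodic table's shells.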