The National Nanotechnology Initiative (NNI) definition of nanotechnology is given in an article (Roco, 1999):

Nanotechnology is the ability to control and restructure matter at the atomic and molecular levels in the range of approximately 1–100 nm, and exploiting the distinct properties and phenomena at that scale as compared to those associated with single atoms or molecules or bulk behaviour. The aim is to create materials, devices, and systems with fundamentally new properties and functions by engineering their small structure.

In 2010, the International Organization for Standardization (ISO) Technical Committee 229 on nanotechnologies (ISO, 2010) issued a definition of nanotechnology:

The application of scientific knowledge to manipulate and control matter in the nanoscale range to make use of size- and structure-dependent properties and phenomena distinct from those at smaller or larger scales.

There are two approaches for performing research within the field of nanotechnology: the top–down approach and the bottom–up approach.

top–down approach

is characterized by a material being processed in bulk, which is then shaped into the finished product. In this approach, the positioning of individual atoms is not controlled during processing.

bottom–up approach

describes a manufacturing process in which it is possible to control individual atoms. Bottom–up approaches rely on either (1) the chemical properties of a single molecule to self-organize or self-assemble into some useful configuration, or (2) positional assembly.

Some of the distinguishing concepts in nanotechnology were first mentioned in 1867 by James Clerk Maxwell.

He proposed a thought experiment known as Maxwell's demon, a tiny entity able to handle individual molecules.

The first observations on the nanoscale were made in the first decade of the 20th century by Richard Adolf Zsigmondy.

He was performing a detailed study of gold sols and other nanomaterials with sizes of 10 nm and less, which was published in 1914. He was able to determine particle sizes smaller than the wavelength of light through the use of ultramicroscopes, which employed the dark field method. He was also the first person to use the nanometer for characterizing particle sizes, which he defined as 1/1,000,000 of a millimeter, and he developed the first system of classification based on particle size in the nanometer range.

Irving Langmuir and Katharine B. Blodgett

first introduced the concept of the monolayer, a layer of material one molecule thick, for which Langmuir won the Nobel Prize.

Richard Feynman (1959)

At the end of his talk, "There's Plenty of Room at the Bottom," he proposed two challenges, for each of which he offered a cash prize. The first was the construction of a working nanomotor, which was achieved in 1960 by

William McLellan

The second challenge involved the possibility of scaling down letters small enough to be able to print the entire Encyclopedia Britannica on the head of a pin. The prize for the second challenge was claimed by

Tom Newman in 1985.

An important historical moment took place in 1965, when Gordon Moore observed

that silicon transistors were undergoing continual scaling down, which was later codified as Moore's Law. Since his observation in 1965, transistor sizes have decreased from 10 µm to the range of 45–60 nm as of 2007.
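
As a quick sanity check on the trend above, the implied feature-size halving period can be computed from the card's own endpoints. This is an illustrative sketch, not Moore's original formulation (which counted transistors per chip):

```python
import math

size_1965_nm = 10_000   # 10 um, as stated above
size_2007_nm = 45       # low end of the 45-60 nm range stated above
years = 2007 - 1965

halvings = math.log2(size_1965_nm / size_2007_nm)   # ~7.8 halvings over the span
print(f"implied halving period: {years / halvings:.1f} years")  # -> ~5.4 years
```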

The term "nano-technology" was first defined in a 1974 paper by Norio Taniguchi of the Tokyo Science University (Lewis, n.d.):

"Nano-technology mainly consists of the processing of separation, consolidation, and deformation of materials by one atom or one molecule."

Dr. Tuomo Suntola (1974)

Developed and patented the process of atomic layer deposition, used for depositing uniform thin films one atomic layer at a time.
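
Because each ALD cycle deposits one self-limiting layer, thickness budgeting reduces to simple arithmetic. A minimal sketch; the growth-per-cycle value below is a hypothetical placeholder, since real values depend on the specific chemistry:

```python
import math

def ald_cycles_needed(target_nm: float, gpc_nm: float = 0.1) -> int:
    """Cycles needed to reach target_nm, assuming gpc_nm deposited per cycle."""
    return math.ceil(target_nm / gpc_nm)

print(ald_cycles_needed(10.0))  # -> 100 cycles for a 10 nm film at 0.1 nm/cycle
```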

Dr. K. Eric Drexler (1980)

- explored the idea that nanotechnology is deterministic rather than stochastic, meaning that nanotechnology is not based on random elements
- promoted the technological significance of nanoscale phenomena and devices through many of his speeches and books
- his vision is often called "molecular nanotechnology" or "molecular manufacturing"

Richard Jones (2004)

- wrote "Soft Machines"
- describes radical nanotechnology as a deterministic idea of nanoengineering machines, one that does not take nanoscale challenges into account
- explains that soft nanotechnology, or biomimetic nanotechnology, is the best way to design functional nanodevices

Soft nanotechnology

can be described as the application of lessons from biology on how things work, chemistry to precisely engineer such devices, and stochastic physics to model the system and its natural processes in detail.

The first important technological advances, which took place in the early 1980s

1. The birth of cluster science
2. The invention of the scanning tunneling microscope (STM)
- an instrument for imaging surfaces at the atomic level
- invented by Gerd Binnig and Heinrich Rohrer, for which they won the Nobel Prize
- a good resolution for an STM is considered to be 0.1 nm lateral resolution and 0.01 nm depth resolution

Don Eigler


- was the first person to manipulate atoms using an STM, in 1989
- he used 35 xenon atoms to spell out the IBM logo

Huffman and Krätschmer (1990)

First discovered how to synthesize and purify large quantities of fullerenes

Dr. T. Ebbesen

- discovery and characterization of carbon nanotubes (CNTs)
- gave a speech that spurred the interest of hundreds of researchers, which helped further develop the field of nanotube-based nanotechnology

AFM (Atomic Force Microscope)

- based on a cantilever with a sharp tip that is used to scan the surface of the specimen
- the cantilever itself is typically silicon or silicon nitride, with a tip radius of curvature on the nanoscale
- when the tip is brought into proximity to the sample surface, forces between the tip and the sample lead to deflection of the cantilever according to Hooke's law (see the sketch below)
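
Since the card above invokes Hooke's law, here is a short sketch of converting a measured cantilever deflection into a tip-sample force. The spring constant is a hypothetical placeholder; real cantilevers are individually calibrated:

```python
def tip_force_nN(deflection_nm: float, k_N_per_m: float = 0.1) -> float:
    """Force in nanonewtons from Hooke's law, F = k * z (magnitude only)."""
    # 1 N/m times 1 nm equals 1 nN, so the units work out directly.
    return k_N_per_m * deflection_nm

print(tip_force_nN(2.0))  # -> 0.2 nN for a 2 nm deflection at k = 0.1 N/m
```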

NNI (National Nanotechnology Initiative) (1999) 4 goals:

1. Advance a world-class nanotechnology research and development program.
2. Foster the transfer of new technologies into products for commercial and public benefit.
3. Develop and sustain educational resources to advance nanotechnology.
4. Support responsible development of nanotechnology.

Some examples of using nanotechnology in the areas of computing, quantum computing, and communication devices can be seen in

- semiconductors
- thin film storage (TFS)
- magnetic random-access memory

Metal oxide semiconductor processor technology (Intel)

- used to make integrated circuit chips
- giving higher speed, higher density, and lower power consumption using quantum dot mechanics

TFS flash memory (Freescale)

- used in microcontrollers, utilizing silicon nanocrystals as the charge storage layer
- the nanocrystal layer enables higher density arrays, lower power operation, faster erase times, and improved reliability
- microcontrollers are the brains of a variety of industrial and consumer products

Magnetic random-access memory (Everspin)

- based on nanometer-scale magnetic tunnel junctions
- has many industrial applications, such as saving data during a system crash, enabling resume-play features, quick storage, and retention of data encryption during shutdown

STM (Scanning Tunneling Microscopy) technique

- Invented in 1981, STM was a totally new technique, since it could image the atom arrangement on a surface in real space for the first time.
- It is so invaluable to science and technology that the inventors of STM shared the Nobel Prize in Physics with the inventor of the electron microscope in 1986 (Heng-Yong, 2010).
- Principle: tunneling of electrons between two electrodes under an electric field.
- However, developing the concept of electron tunneling into an image is not simple: the two electrodes must be brought within about 1 nanometer of each other, and surface cleanness and a vibration-free system are essential to measure the tunneling current accurately.
- Function/working process: by approaching the tip with a specified bias and current, the tip is held at a certain distance from the sample surface so that the specified current (set point) is realized.

Constant Current Method

By scanning the tip across the sample under this condition, the system compares the measured current I with the set-point current Is. It uses the error signal I − Is as the feedback parameter to apply an appropriate voltage to the z-piezo, adjusting the tip–sample distance so as to diminish the error signal and thus providing the height profile of the "topography" of the surface.
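
A toy sketch of the feedback loop just described: the error signal I − Is drives the z-piezo until the set-point current is restored, and the recorded tip height becomes the topography signal. The exponential current model and the gain below are illustrative assumptions, not parameters of a real instrument:

```python
import math

def tunneling_current(gap_nm: float, I0: float = 10.0, kappa: float = 10.0) -> float:
    """Toy model: tunneling current decays exponentially with the gap."""
    return I0 * math.exp(-kappa * gap_nm)

def scan_line(heights_nm, I_set=1.0, gain=0.05, steps=200):
    """Record the settled z-piezo position at each pixel (the 'topography')."""
    z = 0.5  # initial tip height above the reference plane, in nm
    profile = []
    for h in heights_nm:
        for _ in range(steps):            # let the feedback settle at this pixel
            I = tunneling_current(z - h)  # gap = tip height - local surface height
            z += gain * (I - I_set)       # too much current -> tip too close -> retract
        profile.append(round(z, 3))
    return profile

# A 0.2 nm step in the surface shows up directly in the recorded tip height:
print(scan_line([0.0, 0.0, 0.2, 0.2]))   # -> approximately [0.23, 0.23, 0.43, 0.43]
```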

Constant Height method:

The other operation mode (constant height) is to keep the tip–sample distance constant while recording the current, which requires the scanned area to be flat.

Tunneling

is a consequence of the wave nature of matter, where the quantum wave function describes the state of a particle or other physical system
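
The exponential distance sensitivity behind STM can be made explicit with the standard rectangular-barrier result from quantum mechanics (textbook physics, not from the card):

```latex
% Transmission probability through a barrier of width d and height \phi:
T \;\approx\; e^{-2\kappa d},
\qquad
\kappa = \frac{\sqrt{2m\phi}}{\hbar}
% m: electron mass; \phi: effective barrier height (roughly the work function).
% Because T decays exponentially with d, sub-angstrom changes in tip-sample
% distance produce large, easily measured changes in current.
```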

Electron Microscopy

Electron microscopy was first developed in Germany in 1931. It is an important and very versatile technique for nanostructure characterization, since it can provide direct images from which structural details (morphology) can be obtained. Electron microscopy is used to observe a wide range of biological and inorganic specimens. It utilizes parallel beams of electrons that are accelerated by high voltages and focused through a series of electrostatic or magnetic lenses to illuminate the specimen and produce a magnified image (Martin-Palma & Lakhtakia, 2010). Electron microscopy includes transmission electron microscopy (TEM) and scanning electron microscopy (SEM); high-resolution (HR) versions, HRTEM and HRSEM, also exist.
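
The resolution advantage of accelerated electrons follows from their de Broglie wavelength (standard physics, not stated in the card). A quick calculation, ignoring relativistic corrections:

```python
import math

h, m, e = 6.626e-34, 9.109e-31, 1.602e-19   # Planck constant, electron mass, charge (SI)

def electron_wavelength_nm(volts: float) -> float:
    """Non-relativistic de Broglie wavelength of electrons accelerated through `volts`."""
    return h / math.sqrt(2 * m * e * volts) * 1e9

print(f"{electron_wavelength_nm(30_000):.5f} nm")  # ~0.00708 nm at 30 kV
```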

Types of Electron Microscopy

- Scanning Electron Microscopy (SEM)
- Transmission Electron Microscopy (TEM)

Scanning Electron Microscopy (SEM)

- The electron beam is focused and scanned over the sample in a manner similar to that used in old-fashioned television sets with cathode ray tubes.
- The number of backscattered electrons and/or secondary electrons generated by the beam that emerge from the sample depends on the local composition and topography of the sample.
- These electrons are collected by an electron detector, and an image is formed by plotting the detector signal as a function of the beam location.
- This technique has lower resolution than TEM, typically over 1 nm (Martin-Palma & Lakhtakia, 2010).
- The electron gun, at the top of the column, produces the electrons and accelerates them to an energy level of 0.1–30 keV.
- The diameter of the electron beam produced by a hairpin tungsten gun is too large to form an HR image, so electromagnetic lenses and apertures are used to focus and define the electron beam and to form a small focused electron spot on the specimen.
- This process demagnifies the size of the electron source (~50 µm for a tungsten filament) down to the final required spot size (1–100 nm); see the sketch below.
- A high-vacuum environment, which allows electron travel without scattering by the air, is needed.
- The specimen stage, electron beam scanning coils, signal detection, and processing system provide real-time observation and image recording of the specimen surface (Zhou & Wang, 2006).
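
From the numbers quoted above, the demagnification implied by the electron-optical column is easy to estimate:

```python
source_nm = 50_000          # ~50 um tungsten source, as stated above
for spot_nm in (1, 100):    # final required spot size range, as stated above
    print(f"{spot_nm} nm spot -> demagnification ~{source_nm / spot_nm:,.0f}x")
# -> ~50,000x for a 1 nm spot, ~500x for a 100 nm spot
```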

Transmission Electron Microscopy (TEM)

- In TEM, the electron beam travels through the sample and is condensed on a detector plate.
- Images are formed because different atoms interact with and absorb electrons to a different extent. Since electrons interact much more strongly with matter than do X-rays or neutrons of comparable energies or wavelengths, the best results are obtained for sample thicknesses comparable to the mean free path of the electrons (the average distance travelled by the electrons between scattering events). The recommended thickness varies from a few dozen nanometers for samples containing light elements to tens or hundreds of nanometers for samples made of heavy elements.
- The resolving power of TEM is theoretically subatomic, although resolutions around 0.1 nm have been achieved in practice.
- Additionally, TEM allows researchers to generate diffraction patterns for determining the crystallographic structures of samples (Martin-Palma & Lakhtakia, 2010).
- Yao et al. (2011) described a methodology based on hollow cone dark-field (HCDF) TEM to study dislocation structures in both nano- and microcrystalline grains. The conventional approach based on a two-beam condition, commonly used to obtain weak-beam dark-field TEM images of dislocation structures, was very challenging to employ in the study of nanocrystalline materials (especially when grains are less than 100 nm in diameter).

The main difference between SEM and TEM

is that SEM creates an image by detecting reflected or knocked-off electrons, while TEM uses transmitted electrons (electrons that pass through the sample) to create an image. In general, if you need to look at a relatively large area and only need surface details, SEM is ideal; if you need internal details of small samples at near-atomic resolution, TEM will be necessary.

Diffraction

The process by which a beam of light or other system of waves is spread out as a result of passing through a narrow aperture or across an edge, typically accompanied by interference between the wave forms produced.

History of XRD

- The technique was discovered in 1895 and was then accepted as a characterization method around 1922.
- X-ray diffraction (XRD) has only been commonly used for nanoparticle characterization during the last 10 years.

X-ray Diffraction

- The main use of XRD is to determine the arrangement of atoms within a crystal of the specimen.
- In XRD, a collimated beam of X-rays is directed at the sample, and the angles at which the beam is diffracted are measured (see the Bragg's law sketch below). When the beam interacts with an arbitrarily chosen material, its atoms may scatter the rays in all possible directions.
- The intensities and positions of these reflections make up the basic experimental data from the crystal; using a mathematical relation, they can be used to construct a three-dimensional reconstructed image of the protein, called an 'electron density map'.
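
The measured diffraction angles relate to lattice spacings through Bragg's law, n·λ = 2d·sin θ (the standard relation, though the card does not name it). A minimal sketch; the Cu Kα wavelength is the common laboratory value, and the peak angle is just an example input:

```python
import math

def d_spacing_nm(two_theta_deg: float, wavelength_nm: float = 0.15406, n: int = 1) -> float:
    """Lattice plane spacing d from a diffraction peak at 2-theta degrees."""
    theta = math.radians(two_theta_deg / 2)
    return n * wavelength_nm / (2 * math.sin(theta))

print(f"{d_spacing_nm(38.2):.4f} nm")  # a peak near 2-theta = 38.2 deg -> d ~ 0.235 nm
```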

Spectroscopy Techniques

- Spectroscopy techniques are methods that use radiated energy to analyze properties or characteristics of materials.
- They measure intensity as a function of wavelength and produce spectra for comparison purposes.

Infrared spectroscopy

- measures the composition of matter by the light that is absorbed, reflected, or emitted
- used to identify the chemical composition of a sample
- the instrument used for this type of analysis is called an infrared spectrometer

Ultraviolet-visible spectroscopy

- used to analyze the target sample by measuring visible and ultraviolet light emissions
- most commonly used for biological macromolecules and inorganic compounds in a liquid solution
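
In practice, UV-Vis absorbance readings are converted to concentrations with the Beer-Lambert law, A = ε·l·c (standard practice, though not stated in the card). A minimal sketch; the molar absorptivity below is a hypothetical example value, not a property of any specific compound:

```python
def concentration_M(absorbance: float, epsilon_M_cm: float, path_cm: float = 1.0) -> float:
    """Molar concentration from the Beer-Lambert law, A = epsilon * l * c."""
    return absorbance / (epsilon_M_cm * path_cm)

print(concentration_M(0.5, epsilon_M_cm=6220))  # -> ~8.0e-05 M in a 1 cm cuvette
```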

X-ray Spectroscopy

- used to analyze materials by analyzing their interaction with the X-ray part of the electromagnetic spectrum
- measures the change in the energy level of inner-orbital electrons

Fourier Transform Infrared Spectroscopy (FTIR)

- a type of absorbance-analyzing technique
- measures the amount of light absorbed at different wavelengths (see the sketch below)
- can be used to analyze characteristics such as film thickness, optical characteristics of nanoparticles, optical properties of coatings, particle size, and composition
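
The "Fourier transform" in the name refers to how the spectrum is obtained: the instrument records an interferogram (intensity versus mirror displacement), and a Fourier transform converts it into a spectrum. A minimal sketch with a synthetic interferogram; the band positions are arbitrary examples:

```python
import numpy as np

n = 4096
dx = 1.0 / n                           # path-difference step (cm); total span 1 cm
x = np.arange(n) * dx                  # optical path difference in cm
# Synthetic interferogram containing two bands, at 1000 and 1600 cm^-1:
interferogram = np.cos(2 * np.pi * 1000 * x) + 0.5 * np.cos(2 * np.pi * 1600 * x)

spectrum = np.abs(np.fft.rfft(interferogram))
wavenumber = np.fft.rfftfreq(n, d=dx)  # spectral axis in cm^-1

for peak in sorted(np.argsort(spectrum)[-2:]):
    print(f"band near {wavenumber[peak]:.0f} cm^-1")  # recovers 1000 and 1600 cm^-1
```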