School of Information Technology
Indian Institute of Technology Kharagpur
Seminar : Autumn Semester 2005
MTech - Batch 2005
     
SNo   Title                                        Date
1     Biometrics: Fingerprint and Iris             05-10-2005
2     Smart Dust                                   05-10-2005
3     Quantum Computing                            05-10-2005
4     Linux Process Management                     05-10-2005
5     Kerberos                                     19-10-2005
6     DNA Computing                                19-10-2005
7     Hyperthreading                               19-10-2005
8     Routing Protocols in Mobile Adhoc Networks   26-10-2005
9     Clock Less Chip                              26-10-2005
10    Zero Configuration Management                26-10-2005
11    Holographic Memory                           09-11-2005
12    Parasitic Computing                          09-11-2005
13    Game Playing in Artificial Intelligence      09-11-2005
14    Supervisory Control and Data Acquisition     16-11-2005
15    Software Agents                              16-11-2005
16    IP Spoofing                                  16-11-2005
 
Biometrics: Fingerprint and Iris
Abstract   Lt. Cdr. V. Pravin
Biometrics is a means of using parts of the human body as a kind of permanent password. Just as your fingerprints are unlike those of any other person, your eyes, ears, hands, voice, and face are also unique. Technology has advanced to the point where computer systems can record and recognize these patterns, along with hand shapes, ear lobe contours, and a host of other physical characteristics. Using biometrics, devices can be empowered with the ability to instantly verify your identity and deny access to everybody else. Tokens such as smart cards, magnetic stripe cards, and physical keys can be lost, stolen, or duplicated. Forgotten passwords and lost smart cards make life difficult for users and waste the expensive time of system administrators. In biometrics the person himself is the password, as biometric authentication is based on the identification of an intrinsic part of a human being. In this seminar I shall explain biometrics and the two biometric modalities currently in widest use: fingerprint and iris recognition.
     
Email : vpravin@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~vpravin
Presentation [ppt]
Report [doc]
 
 
 
Smart Dust
Abstract   Shruti Srivastava
Smart dust is a tiny, dust-sized device with extraordinary capabilities. It combines sensing, computing, wireless communication and an autonomous power supply within a volume of only a few cubic millimeters, and at low cost. These devices are proposed to be so small and light that they can remain suspended in the environment like ordinary dust particles. These properties make Smart Dust useful for monitoring real-world phenomena without disturbing the original process to any observable extent. In this seminar we will discuss the techniques that can be employed for communication between Smart Dust motes and the base station, along with their pros and cons.
     
Email : shrutis@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~shrutis
Presentation [ppt]
Report [doc]
 
 
 
Quantum Computing
Abstract   Pantha Kanti Nath
In quantum computers we exploit quantum effects to compute in ways that are faster or more efficient than, or even impossible on, conventional computers. Quantum computers use a specific physical implementation to gain a computational advantage over conventional computers. Properties called superposition and entanglement may, in some cases, allow an exponential amount of parallelism. Special-purpose machines such as quantum cryptographic devices also use entanglement and other peculiarities like quantum uncertainty. Quantum computing combines quantum mechanics, information theory, and aspects of computer science. Quantum computers require quantum logic, which is fundamentally different from classical Boolean logic; this difference leads to a greater efficiency of quantum computation over its classical counterpart. The field is a relatively new one that promises secure data transfer and dramatic increases in computing speed, and it may take component miniaturization to its fundamental limit.
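As a minimal illustration of the superposition property mentioned above (a toy sketch, not a quantum algorithm), the following simulates a single qubit as a state vector and applies a Hadamard gate with NumPy; measuring the resulting state would give |0> or |1> with equal probability.

# Toy single-qubit simulation illustrating superposition (illustrative only).
import numpy as np

ket0 = np.array([1.0, 0.0])                    # the |0> basis state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                               # equal superposition (|0> + |1>)/sqrt(2)
probs = np.abs(state) ** 2                     # Born rule: measurement probabilities
print(probs)                                   # [0.5 0.5]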
     
Email : panthan@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~panthan
Presentation [pdf]
Report [pdf]
 
 
 
Linux Process Management
Abstract   Dhage Manoj M.
In theory, there is no difference between theory and practice; in practice, there is. Learning the standard operating system concepts breaks the ice for studying a practical operating system implementation. In this seminar we will take a look at Linux process management and the scheduler. I will explain the Linux 2.6.8.1 CPU scheduler, and discuss process management in the context of this scheduler. The various attributes of the process descriptor are used to manage and control the lifetime of a process. We will see in detail how the process descriptor is implemented, and which data structures the kernel uses for managing all the processes. Up to the 2.4.x series, the scheduler's running time grew linearly with the number of processes in the system; the asymptotic running time of those schedulers is O(n), where n is the number of processes. The 2.6.8.1 scheduler, in contrast, performs all its duties in O(1) time: it takes a constant amount of time to make a scheduling decision, independent of the number of processes in the system, and no part of the scheduler takes more time than this. We will see how this is achieved.
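The constant-time claim rests on keeping a run queue per priority level plus a bitmap of non-empty queues, so that finding the highest-priority runnable task never requires scanning all processes. The sketch below illustrates that idea in simplified Python; it is not the kernel's actual C implementation, and task names are invented.

# Sketch of the O(1) scheduler's core idea: a bitmap of non-empty priority
# queues lets pick_next_task run in constant time (simplified, not kernel code).
from collections import deque

NUM_PRIO = 140                      # the 2.6 kernel uses 140 priority levels
queues = [deque() for _ in range(NUM_PRIO)]
bitmap = 0                          # bit p is set when queues[p] is non-empty

def enqueue(task, prio):
    global bitmap
    queues[prio].append(task)
    bitmap |= (1 << prio)

def pick_next_task():
    global bitmap
    if bitmap == 0:
        return None                 # nothing runnable
    prio = (bitmap & -bitmap).bit_length() - 1   # lowest set bit = highest priority
    task = queues[prio].popleft()
    if not queues[prio]:
        bitmap &= ~(1 << prio)
    return task

enqueue("init", 120)
enqueue("kworker", 100)
print(pick_next_task())             # "kworker" (numerically lower = higher priority)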
     
Email : dhagem@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~dhagem
Presentation [ppt]
Report [doc]
 
 
 
Kerberos
Abstract   Lt. Cdr. Samit Mehra
Kerberos is a security system that helps prevent people from stealing information that gets sent across the wires from one computer to another. Usually, these people are after your password. The name "Kerberos" comes from the mythological three-headed dog whose duty it was to guard the entrance to the underworld. The Kerberos security system, on the other hand, guards electronic transmissions that get sent across the Internet. It does this by scrambling the information (encrypting it) so that only the computer that is supposed to receive the information can unscramble it. In addition, it makes sure that your password itself never gets sent across the wire: only a scrambled "key" to your password. Kerberos is necessary because there are people who know how to tap the lines between computers and listen for your password. They do this with programs called "sniffers", and the only way to stop them would be to physically guard every inch of the Internet: computers, cables and all. This, of course, is impossible. As long as there are physically insecure networks in the world, we will need something like Kerberos to maintain the integrity and security of our electronic communications.
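The central point above, that the password itself never crosses the wire, can be shown with a toy challenge-response exchange. This is a deliberately simplified sketch, not the Kerberos ticket protocol (which involves a key distribution centre), and the password, salt and names are invented: both sides hold a key derived from the password, and the client proves knowledge of it by answering a random server challenge with an HMAC.

# Toy challenge-response: the password never crosses the wire (illustrative,
# far simpler than real Kerberos tickets and key distribution).
import hashlib, hmac, os

def derive_key(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = b"realm-salt"
server_key = derive_key("s3cret", salt)      # server stores a derived key, not the password

# Server sends a random challenge; client answers with an HMAC over it.
challenge = os.urandom(16)
client_key = derive_key("s3cret", salt)      # client derives the same key locally
response = hmac.new(client_key, challenge, "sha256").digest()

# Server verifies without the password ever appearing on the wire.
expected = hmac.new(server_key, challenge, "sha256").digest()
print(hmac.compare_digest(response, expected))   # True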
     
Email : samitm@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~samitm
Presentation [ppt]
Report [doc]
 
 
 
DNA Computing
Abstract   Ayan Kumar Roy
DNA (Deoxyribose Nucleic Acid) computing, also known as molecular computing, is a new approach to massively parallel computation based on groundbreaking work by Adleman. DNA computing was proposed as a means of solving a class of intractable computational problems in which the computing time can grow exponentially with problem size (the 'NP-complete' or non-deterministic polynomial time complete problems). A DNA computer is basically a collection of specially selected DNA strands whose combinations will result in the solution to the problem at hand. Technology is currently available both to select the initial strands and to filter the final solution. Conventional computers use miniature electronic circuits etched on silicon chips to control information represented by electrical impulses. However, this silicon technology is starting to approach the limits of miniaturization, beyond which it will not be possible to make chips more powerful. DNA computing, on the other hand, represents information as a pattern of molecules arranged along a strand of DNA. These molecules can be manipulated, copied and changed by biochemical reactions in predictable ways through the use of enzymes. The appeal of DNA computing lies in the fact that DNA molecules can store far more information than any existing conventional computer chip. It has been estimated that a gram of dried DNA can hold as much information as a trillion CDs. Moreover, in a biochemical reaction taking place in a tiny surface area, hundreds of trillions of DNA molecules should be able to operate in concert, creating a parallel processing system with the power of the largest current supercomputers. A highly interdisciplinary field, DNA computing is currently one of the fastest growing areas in both Computer Science and Biology, and its future looks extremely promising.
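Adleman's original experiment solved a small Hamiltonian-path instance by letting random DNA strands encode candidate paths in parallel and then chemically filtering out everything that was not a solution. The sketch below imitates that generate-and-filter style in software on an invented directed graph; running sequentially, it of course lacks the massive parallelism that makes the DNA approach interesting.

# Software imitation of Adleman-style generate-and-filter for a Hamiltonian
# path: generate random paths ("strands"), then keep only the valid ones.
import random

edges = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"), ("B", "D")}   # directed, made up
nodes = ["A", "B", "C", "D"]

def random_path():
    return [random.choice(nodes) for _ in range(len(nodes))]

def is_hamiltonian(path):
    visits_all_once = sorted(path) == sorted(nodes)
    follows_edges = all((a, b) in edges for a, b in zip(path, path[1:]))
    return visits_all_once and follows_edges

# A test tube's worth of random strands, filtered down to the solutions.
strands = [random_path() for _ in range(100_000)]
solutions = {tuple(p) for p in strands if is_hamiltonian(p)}
print(solutions)    # {('A', 'B', 'C', 'D')} for this directed graph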
     
Email : ayanr@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~ayanr
Presentation [ppt]
Report [doc]
 
 
 
Hyperthreading
Abstract   Nagendra Rao Katoori
Hyper-Threading Technology brings the concept of simultaneous multi-threading to the Intel Architecture. Hyper-Threading Technology makes a single physical processor appear as two logical processors. The physical execution resources are shared and the architecture state is duplicated for the two logical processors. From a software or architecture perspective, this means operating systems and user programs can schedule processes or threads to logical processors as they would on multiple physical processors. From a microarchitecture perspective, this means that instructions from both logical processors will persist and execute simultaneously on shared execution resources. This seminar presents the Hyper-Threading Technology architecture.
     
Email : nkatoori@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~nkatoori
Presentation [ppt]
Report [doc]
 
 
 
Routing Protocols in Mobile Adhoc Networks
Abstract   Nitesh Jain
An ad hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any centralized administration, in which individual nodes cooperate by forwarding packets to each other to allow nodes to communicate beyond direct wireless transmission range. Routing is the process by which information is carried from one station to other stations of the network. Routing protocols for mobile ad hoc networks tend to need different approaches from existing Internet protocols because of dynamic topology, mobile hosts, a distributed environment, limited bandwidth and limited battery power. Ad hoc routing protocols can be divided into two categories based on when and how routes are discovered: table-driven (proactive) and on-demand (reactive). In table-driven routing protocols each node maintains one or more tables containing routing information about the nodes in the network, whereas in on-demand routing the routes are created as and when required. Some of the table-driven routing protocols are Destination-Sequenced Distance-Vector routing (DSDV), Clusterhead Gateway Switch Routing (CGSR), Hierarchical State Routing (HSR) and the Wireless Routing Protocol (WRP). Among the on-demand routing protocols are Ad hoc On-Demand Distance Vector routing (AODV), Dynamic Source Routing (DSR) and the Temporally Ordered Routing Algorithm (TORA). Many other routing protocols are available; the Zone Routing Protocol (ZRP) is a hybrid of the two approaches.
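To make the on-demand idea concrete, here is a simplified sketch in the spirit of DSR/AODV route discovery: a route request is flooded hop by hop, each node forwards it once, and the first copy to reach the destination carries the discovered route. The topology and node names are invented, and real protocols add sequence numbers, route caching and error handling.

# Simplified on-demand route discovery: flood a route request (RREQ) breadth-
# first until the destination is reached (DSR/AODV-like in spirit only).
from collections import deque

neighbours = {            # who can hear whom over the wireless links (made up)
    "S": ["A", "B"],
    "A": ["S", "C"],
    "B": ["S", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def discover_route(src, dst):
    frontier = deque([[src]])           # each entry is the path the RREQ has taken
    seen = {src}                        # nodes drop duplicate RREQs
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == dst:
            return path                 # destination replies along this path
        for nxt in neighbours[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(discover_route("S", "D"))         # ['S', 'B', 'D']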
     
Email : niteshj@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~niteshj
Presentation [ppt]
Report [doc]
 
 
 
Clock Less Chip
Abstract   K. Subrahmanya Sreshti
The clockless approach, which uses a technique known as asynchronous logic, differs from conventional computer circuit design in that the switching of digital circuits is controlled individually by specific pieces of data rather than by a central clock that forces all of the millions of circuits on a chip to march in unison. It addresses the disadvantages of clocked circuits, such as clock-limited speed, high power consumption and high electromagnetic noise. For these reasons, clockless technology is regarded as the technology that is going to drive the majority of electronic chips in the coming years.
     
Email : kss@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~kss
Presentation [ppt]
Report [doc]
 
 
 
Zero Configuration Management
Abstract   Pusparaj Mahapatra
TCP/IP networking has been deployed in many environments and has been especially successful in large networks such as those in universities, corporations, and government agencies. Operating an IP network requires specialized technical skills, so IP networking has not been especially well suited to smaller networks (such as those in homes and small businesses) where capable network administration is not feasible. Current developments in the computer industry as well as in the Internet Engineering Task Force (IETF) hold out the promise that it will soon be possible to deploy and use IP-based hosts in environments completely lacking administration and infrastructure; this is called zero-configuration networking. Zero Configuration Networking, also known as ZeroConf, is networking that needs nothing to be pre-configured and no administration to operate. It uses industry-standard IP protocols to allow devices to find each other automatically without the need to enter IP addresses or configure DNS servers, eliminating the need for services such as DHCP and DNS. The goal of Zero Configuration Networking is to enable networking in the absence of configuration and administration. Although this automatic configuration is currently feasible only for small networks, it may extend to whole networks in the future.
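One concrete piece of ZeroConf is self-assigned IPv4 link-local addressing (standardized in RFC 3927): a host picks a random address in 169.254.0.0/16 and keeps it only if no other host on the link already uses it. The sketch below shows just the selection loop; the conflict check is stubbed out, whereas a real host would send ARP probes on the link.

# Sketch of ZeroConf-style IPv4 link-local address selection (in the spirit of
# RFC 3927); conflict detection is a placeholder here.
import random

def random_link_local():
    # 169.254.0.x and 169.254.255.x are reserved, so draw from the middle range.
    return f"169.254.{random.randint(1, 254)}.{random.randint(0, 255)}"

def address_in_use(addr):
    return False        # placeholder: a real host would ARP-probe the link for addr

def choose_address(max_tries=10):
    for _ in range(max_tries):
        addr = random_link_local()
        if not address_in_use(addr):
            return addr
    raise RuntimeError("could not obtain a link-local address")

print(choose_address())     # e.g. 169.254.87.203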
     
Email : pusparajm@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~pusparajm
Presentation [ppt]
Report [doc]
 
 
 
Holographic Memory
Abstract   Sourabh Gupta
Devices that use light to store and read data have been the backbone of data storage for nearly two decades. In this category of storage, the DVD currently holds the most data. To increase storage capacity, scientists are now working on a new optical storage method called holographic memory, a technique that can store information at high density inside crystals or photopolymers. In this technology the laser goes beneath the surface and uses the volume of the recording medium for storage, instead of only the surface area. Holographic memory also provides a faster data transfer rate, and it definitely has the potential to become the next generation of storage media. Disks based on this technology are called Holographic Versatile Discs (HVD). These discs have the capacity to hold up to 3.9 terabytes (TB) of information, and the HVD also has a transfer rate of 1 Gbit/s. Like other media, HVDs can be divided into write-once and rewritable variants; rewritable HVDs can be achieved using the photorefractive effect in crystals.
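To put the quoted numbers in perspective, here is a quick back-of-the-envelope calculation, assuming the 3.9 TB capacity and 1 Gbit/s transfer rate cited above and a 4.7 GB single-layer DVD for comparison.

# Back-of-the-envelope numbers for the capacity and transfer rate quoted above.
capacity_tb = 3.9
capacity_bits = capacity_tb * 1e12 * 8          # 3.9 TB in bits (decimal units)
rate_bits_per_s = 1e9                           # 1 Gbit/s

seconds_to_fill = capacity_bits / rate_bits_per_s
print(f"{seconds_to_fill / 3600:.1f} hours to read or write the full disc")   # ~8.7 hours

dvd_gb = 4.7                                    # single-layer DVD capacity
print(f"{capacity_tb * 1000 / dvd_gb:.0f} DVDs' worth of data")               # ~830 DVDs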
     
Email : sourabhg@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~sourabhg
Presentation [ppt]
Report [doc]
 
 
 
Parasitic Computing
Abstract   Kunal Goswami
The Net is a fertile place where new ideas and products surface quite often. We have already come across many innovative ideas such as peer-to-peer file sharing, distributed computing and the like. Parasitic computing, which harnesses the computing power of machines spread across the Net to accomplish complex computing tasks, is new in this category. This successor to distributed computing has opened up a whole new can of worms. It works by exploiting the checksum-based error checking built into TCP/IP. The problem is that forcing target machines to perform calculations puts a greater load on them than a regular packet would, and the server owner has not agreed to take part; in effect the technique is stealing processing power, but without breaking any laws. Although the technique is too slow to have much practical value at present, it does raise questions for the future.
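In the original parasitic-computing demonstration, candidate solutions were encoded into TCP segments so that only segments carrying a correct candidate passed the receiver's checksum test; everything else was silently dropped, and the remote host's checksum verification thus did the "computation". The sketch below shows only the arithmetic trick with the standard 16-bit ones'-complement Internet checksum; the framing and target equation are invented for illustration.

# The parasitic-computing trick in miniature: craft a segment whose Internet
# checksum verifies only when the candidate values satisfy a target equation
# (here: x + y == 0x1234 in ones'-complement arithmetic).

def ones_complement_sum(words):
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return total

def receiver_accepts(words, checksum_field):
    # Standard verification: the sum of data words plus checksum must be 0xFFFF.
    return ones_complement_sum(words + [checksum_field]) == 0xFFFF

TARGET = 0x1234
checksum_field = (~TARGET) & 0xFFFF       # crafted so that only solutions verify

print(receiver_accepts([0x1000, 0x0234], checksum_field))   # True: 0x1000 + 0x0234 == 0x1234
print(receiver_accepts([0x1000, 0x0235], checksum_field))   # False: wrong candidate is dropped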
     
Email : kunalg@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~kunalg
Presentation [ppt]
Report [doc]
 
 
 
 
Game Playing in Artificial Intelligence
Abstract   Phapale Gaurav S
Game playing has a history as old as Artificial Intelligence itself. Most of the games studied in AI are two-player, zero-sum games. Game trees are used to represent a game: a minimax tree represents the possible game configurations along with the players' utility values, and is used to make decisions during play. Most games have huge game trees and must be played under time constraints. Alpha-beta pruning is used to make a decision within the given time constraints without searching through the whole game tree.
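The sketch below shows minimax search with alpha-beta pruning on a tiny, explicitly given game tree. It illustrates the algorithm named above; real game-playing programs generate moves on the fly, cut off at a depth limit, and score positions with an evaluation function.

# Minimax with alpha-beta pruning over an explicit game tree (toy example).
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):        # leaf: utility value for MAX
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                          # beta cut-off: MIN will never allow this branch
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break                          # alpha cut-off
        return value

# Nested lists encode the tree; numbers are leaf utilities.
tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(tree, maximizing=True))        # 3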
     
Email : gauravp@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~gauravp
Presentation [ppt]
Report [doc]
 
 
 
Supervisory Control and Data Acquisition
Abstract   Prasad Mane
SCADA stands for Supervisory Control and Data Acquisition. As the name indicates, it is not a full control system, but rather focuses on the supervisory level. It is a computer system for gathering and analyzing real-time data. SCADA systems are used to monitor and control plant or equipment in industries such as telecommunications, water and waste control, energy, oil and gas refining, and transportation. A SCADA system gathers information (such as where a leak on a pipeline has occurred), transfers the information back to a central site, alerts the home station that the leak has occurred, carries out the necessary analysis and control (such as determining whether the leak is critical), and displays the information in a logical and organized fashion. SCADA systems can be relatively simple, such as one that monitors the environmental conditions of a small office building, or incredibly complex, such as a system that monitors all the activity in a nuclear power plant or the activity of a municipal water system. This paper describes SCADA systems in terms of their architecture, their interface to the process hardware, their functionality and the application development facilities they provide.
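The gather-analyze-alert loop described above can be given a minimal software flavour with the sketch below. All unit names, readings and limits are invented, and the field reads are simulated; a real SCADA master would talk to RTUs or PLCs over an industrial protocol such as Modbus.

# Minimal flavour of a supervisory polling loop: poll remote units, check
# readings against limits, and raise alarms (simulated data only).
import random, time

ALARM_LIMIT_PSI = 80.0

def poll_remote_unit(unit_id):
    return {"unit": unit_id, "pressure_psi": random.uniform(60, 100)}   # stand-in for a field read

def supervise(units, cycles=3):
    for _ in range(cycles):
        for unit in units:
            reading = poll_remote_unit(unit)
            if reading["pressure_psi"] > ALARM_LIMIT_PSI:
                print(f"ALARM: {unit} pressure {reading['pressure_psi']:.1f} psi")
            else:
                print(f"ok:    {unit} pressure {reading['pressure_psi']:.1f} psi")
        time.sleep(0.1)      # polling interval (shortened for the example)

supervise(["pump-station-1", "pump-station-2"])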
     
Email : prasadm@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~prasadm
Presentation [ppt]
Report [doc]
 
 
 
Software Agents
Abstract   Prateek Rastogi
The next wave of technological innovation must integrate linked organizations and multiple application platforms. Developers must construct unified information management systems that use the World Wide Web and advanced software technologies. Software agents, one of the most exciting new developments in computer software technology, can be used to quickly and easily build integrated enterprise systems. The idea of having a software agent that can perform complex tasks on our behalf is intuitively appealing. The natural next step is to use multiple software agents that communicate and cooperate with each other to solve complex problems and implement complex systems. Software agents provide a powerful new method for implementing these next-generation information systems.
     
Email : prateekr@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~prateekr
Presentation [ppt]
Report [doc]
 
 
 
IP Spoofing
Abstract   Vipin Singh Mewar
IP spoofing replaces the IP address of the sender (or, in some attacks, the destination) with a different address. Authentication routines that are totally invisible to the user occur on the Internet between machines to identify each other: one machine demands some form of identification from another, and until this identification is produced and validated, no transactions occur between the machines engaged in the challenge-response dialog. In IP spoofing these authentication routines are attacked in order to gain unauthorized access to a machine. There are several techniques to counter IP spoofing; one of them is IP traceback, which is used to determine the source as well as the full path taken by the attack packets.
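The reason spoofing is possible at all is that the source address is just a field the sender fills in; nothing in basic IP verifies it. The sketch below packs a minimal 20-byte IPv4 header with an arbitrary (forged) source address purely to show where that field sits; the addresses are documentation/example values, the header checksum is left at zero, and actually transmitting such packets would require a raw socket and constitutes an attack.

# Illustration only: the IPv4 source address is an unauthenticated header field.
import socket, struct

def ipv4_header(src_ip, dst_ip, payload_len):
    version_ihl = (4 << 4) | 5                        # IPv4, 20-byte header
    total_length = 20 + payload_len
    return struct.pack("!BBHHHBBH4s4s",
                       version_ihl, 0, total_length,
                       0x1234, 0,                     # identification, flags/fragment offset
                       64, socket.IPPROTO_TCP, 0,     # TTL, protocol, checksum (left 0 here)
                       socket.inet_aton(src_ip),      # <- nothing verifies this field
                       socket.inet_aton(dst_ip))

forged = ipv4_header("10.0.0.99", "192.0.2.1", payload_len=0)
print(forged.hex())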
     
Email : vipinms@sit.iitkgp.ernet.in
Personal WebPage : http://sit.iitkgp.ernet.in/~vipinm
Presentation [ppt]
Report [doc]
 
 
All rights Reserved. ©2005.
School of Information Technology