Sridhar Iyer: Project Students
This page is perpetually under construction. Reports/Slides of each project will be available only after the abstract appears on this page.
Last Updated: Dec 2019.
||Dr. Veenita Shah
(PhD in Molecular Biology from Oklahoma State University)
|Designing learner centric MOOCs.
(Fellowship supported by NextEducation Research Lab)
||Dr. Gargi Banerjee
(PhD in Educational Technology from IIT Bombay)
|Teacher Use of Educational Technology.
(Fellowship supported by NextEducation Research Lab)
||Dr. Farida Umrani-Khan
(PhD in Psychology from IIT Bombay)
|Computer literacy and e-learning technologies in the Indian context.
Computer Masti books;
(Fellowship supported by Project OSCAR)
||Dr. Leena Chandran-Wadia
(PhD in Physics from IISc Bangalore)
|Medium access control in wireless networks.
(Fellowship supported by Infosys)
TELoTS: deductive reasoning skills:
TELoTS: Network Design skills:
TELoTS: Troubleshooting skills:
[Co-guide: Dr. M Sasikumar]
[Jan 2014 - June 2019 (defended Nov 2019)]
Teaching-Learning of Expand-Reduce skills in the context of Software Design:
[Co-guide: Dr. M Sasikumar]
Thesis: PDF file [6.9 M].
Slides: PPT [2.5 M].
Software design problems are ill-structured: the problem space and solution space are
not well-defined. In the problem space, the requirements are not well defined, and a designer has to
formulate incomplete requirements into specific data models, main software functions, and sub-
functions. In the solution space, there may be multiple solution paths and alternative design options, and
the criteria to evaluate and select an optimal solution may not be clearly stated. A key issue in
software design is that the quality of the design is heavily dependent on the expertise and
experience of the designer. Novices tend to think at the programming level and reduce early to
the solution design, which affects the quality of the design in many ways.
Research studies have shown that systematically expanding and reducing the problem-solution
spaces improves the quality of the solution design. In this thesis, we refer to the ability to
expand the problem-solution space and eventually reduce towards a solution as expand-reduce
(ER) skills. In the problem space, the ER skill is the ability to explore the problem as a whole and
reduce the problem into sub-problems based on the goal to be achieved. In the solution space, the ER
skill is the ability to generate alternative solutions and reduce to one solution by evaluating and
selecting based on selection criteria.
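The reduce step in the solution space amounts to scoring alternatives against selection criteria. The thesis does not give a formula for this, but a minimal sketch of criteria-based reduction, with invented design alternatives, criteria and weights, might look like:

```python
# Hypothetical illustration (not from the thesis): reducing a solution
# space by scoring design alternatives against weighted criteria.
alternatives = {
    "hash_table": {"lookup_speed": 5, "memory_use": 2, "ordered_access": 1},
    "balanced_bst": {"lookup_speed": 3, "memory_use": 4, "ordered_access": 5},
    "sorted_array": {"lookup_speed": 4, "memory_use": 5, "ordered_access": 4},
}
weights = {"lookup_speed": 0.5, "memory_use": 0.2, "ordered_access": 0.3}

def score(option):
    """Weighted sum of the option's ratings on each criterion."""
    return sum(weights[c] * v for c, v in option.items())

# Reduce: keep the alternative with the highest weighted score.
best = max(alternatives, key=lambda name: score(alternatives[name]))
print(best)  # sorted_array
```

With these invented weights the designer would retain the sorted array; changing the weights (the "selection criteria") changes which alternative survives the reduction.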
Existing research studies have established the importance of ER skills in solving ill-
structured problems, but little research has been done on the teaching-learning of ER
skills to novices. To address this research gap, the broad research objective of this thesis is:
"teaching-learning of ER skills to novices using a technology-enhanced learning environment."
Our research is scoped to the teaching-learning of ER skills to undergraduate computer engineering
students in the context of solving software design problems using appropriate data structures and algorithms.
We have used design-based research (DBR) methodology to design the technology-enhanced
learning environment, named Fathom, for teaching-learning of ER skills. We have completed
three DBR cycles to design-evaluate-redesign our intervention based on the feedback from
learners and practitioners. Four research studies were carried out, which included two
exploratory studies and two pretest-posttest experimental studies (N_total = 200). Both quantitative
and qualitative data were collected and analyzed. Quantitative data was used to measure the
learning of ER skills by comparing the learners' performance in pretest, intervention, and
posttest. Qualitative data in the form of log data, screen capture, and focus group interviews were
used to analyze the behaviors exhibited by learners while interacting with the learning
environment and student perceptions about learning ER skills.
The results showed that Fathom was effective in helping novices learn ER skills. The major
contributions of this thesis are: insights into the cognitive biases of novices towards
applying ER skills, identification of effective cognitive and metacognitive scaffolds in a
technology-enhanced learning environment, identification of ER cognitive tools and the
development of a learning environment for learning ER skills, in the context of software design.
[Jan 2013 - Jan 2018 (defended April 2018)]
Visual analytics of cohorts in educational datasets:
[Co-guide: Prof. Anirudha Joshi]
Thesis: PDF file [16.0 M].
Slides: PPT [8.8 M].
Learning Analytics is one of the focus areas for understanding teaching-learning practices in the
current technology-enabled learning era. The insights gained thereby assist in designing better
learning experiences for students. We consider three levels at which to analyze learning data. A Micro-level
view focuses on an individual learner. A Macro-level view focuses on the overall group
of learners, for instance a class of students. A Meso-level view analyzes cohorts (sub-groups)
within those learners. In this thesis we have conceptualized Interactive Stratified Attribute Tracking
(iSAT) as a meso-level visual analysis model for educational data. It helps to track transitions
across time or across attributes of collected data, and thus build a narrative about the learners or
the learning context as it changes across time or across attributes. I adopted the Design and
Development Research methodology to conduct the research in three phases. Initially, in the
Need and Context analysis phase, I established stakeholders' need for analyzing transitions at a meso-level.
I also reviewed current learning analytics techniques for analysing cohorts
and the limitations of existing dashboards and visualizations in supporting such an analysis. Based
on those findings, I set the goals of design, development and evaluation. Next, in the Design and
Development phase, I followed the Design Science paradigm to create the iSAT model for visual
cohort analysis at the meso-level. There were three stages in the design and development phase:
(i) the Genesis stage of the iSAT model, (ii) Refinement of that model and (iii) Implementing the
iSAT model as a web-based free-access tool. I defined the constructs of that meso-level analysis
and evolved the methods involved in generating, representing and interpreting the information.
The iSAT model was applied by 12 researchers in 9 different scenarios to analyze their educational
datasets, resulting in 12 peer-reviewed published research studies (6 conference papers, 1 journal article
and 2 theses). Further, to disseminate the iSAT model and tool among stakeholders, we conducted
4 iSAT workshops: 3 at international conferences and 1 at an in-house symposium. In the Evaluation
phase, we synthesized the usefulness of the iSAT model from the 12 case studies, studied the
perceptions of first-time users of iSAT and analysed its applicability. iSAT helped both instructors
and researchers gain an overview of the transition patterns, based on which they could
understand the dynamics of cohorts and compare them. It can aid instructors in instructional
decision-making and researchers in refining their analysis from the point of view of cohorts. The
mean SUS score for the iSAT tool was 71.57, indicating an acceptable system.
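iSAT itself is a web-based visual tool, but the meso-level transition tracking it supports can be sketched in a few lines. The attribute values and learner records below are invented for illustration; iSAT's actual implementation is not reproduced here:

```python
from collections import Counter

# Invented records: each learner's performance band in pretest and posttest.
records = [
    ("low", "mid"), ("low", "mid"), ("low", "low"),
    ("mid", "high"), ("mid", "mid"), ("high", "high"),
    ("high", "mid"),
]

# Meso-level view: count transitions between strata. These counts are the
# flows an iSAT-style diagram would draw between two attribute columns.
transitions = Counter(records)
for (pre, post), n in sorted(transitions.items()):
    print(f"{pre:>4} -> {post:<4}: {n}")
```

Each counted flow (e.g. two learners moving from "low" to "mid") is one cohort whose transition an instructor or researcher could then interpret.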
[July 2012 - Jan 2018 (defended July 2018)]
Fostering Cognitive Processes of Knowledge Integration through Exploratory Question-Posing:
Thesis: PDF file [8.9 M].
Slides: PDF [6.9 M].
When students encounter new knowledge, it is often fragmented and not well connected
with their existing knowledge. Knowledge fragmentation is often larger for a learner who is new
to a topic. Supporting knowledge integration (KI) is crucial to overcoming learners’ knowledge
fragmentation. Moreover, better KI ensures deeper conceptual understanding of a science topic.
This thesis aims to explicitly target the improvement of learners’ cognitive processes of KI.
KI has been defined as "the process by which learners sort out connections between
new and existing ideas to reach more normative and coherent understanding of science." It
is recommended that instruction should support at least the following cognitive processes: (i)
Eliciting prior knowledge that may be related to the new knowledge; (ii) Focusing on the new
knowledge; (iii) Distinguishing ideas - identifying conflicts, inconsistencies, and gaps. Prior
research typically aims at devising instructional supports for KI for specific topics. However,
this thesis targets the improvement of learners' cognitive processes of KI which, once improved,
may be applied in different topics, even topics other than the one through which the skill was acquired. Our broad
research objective is: "Designing and evaluating a technology-enhanced learning (TEL)
environment to improve learners' cognitive processes associated with knowledge integration."
Our solution is based on using exploratory question-posing (EQP) as a cognitive tool
for fostering cognitive processes of KI. EQP is a kind of question-posing wherein learners pose
questions with the aim of exploring more knowledge around a given set of knowledge. We have
empirically found that to do EQP a learner needs to link knowledge pieces from the given new
knowledge and her/his prior knowledge. This means that linking leads to EQP; however, EQP
may further lead to more linking, which can be considered a positive feedback loop.
The primary research question that we answer in this thesis is: "How to employ EQP
in a TEL environment to improve students' cognitive processes associated with KI in a Data
Structures course?" Our field studies have been administered in a number of topics in the
domain of Data Structures. The target population of this research is first- and second-year
engineering undergraduates. The artifacts produced are applicable to the Data Structures and Algorithms domain.
We have used design based research (DBR) as our overall research framework. DBR is a
research methodology that aims at the development of educational interventions and/or learning
environments through iterative analysis, design, development, implementation, and evaluation.
The research activities are based on collaboration among researchers and practitioners in real-world
settings, and they lead to contextually-sensitive design principles and theories.
We have executed two research cycles of DBR where each of the cycles involved the
research activities of problem analysis, solution design, evaluations and reflections. By the
end of the two DBR cycles we designed, developed and evaluated the following: (1) An
EQP-based pedagogy known as “Inquiry-based Knowledge Integration Training (IKnowIT) -
pedagogy” and (2) A web-based self-learning environment known as “IKnowIT-environment”
as an operationalization of the IKnowIT-pedagogy. In addition to the IKnowIT-pedagogy and
the IKnowIT-environment, the thesis contributes by: (1) Identifying the frequently-employed
EQP-strategies, which explain how learners integrate different knowledge pieces to arrive at an
exploratory question in the domain of data structures, and (2) Extracting local learning theories
that explain how learners' engagement with the features of the IKnowIT-pedagogy, including
question-posing and EQP-strategies, leads to the improvement of cognitive processes of KI in
them. The results show that the designed IKnowIT-pedagogy successfully fosters learners'
cognitive processes of KI using EQP.
[July 2012 - April 2017 (defended April 2018)]
A model for large-scale, in-service teacher training on effective technology integration in engineering education:
[Guide: Prof. Sahana Murthy]
Thesis: PDF file [14.5 M].
Slides: PDF [6.9 M].
The proliferation of information and communication technologies (ICT) has led to their
widespread use in classrooms around the world in the last two decades. However, for
improved student learning, the focus of teaching-learning practice has to shift from
routine use of ICT for demo/display to effective ICT integration, that is, the
comprehensive process of applying ICT to the curriculum to improve teaching-learning,
which relies heavily on pedagogical design. Teacher professional development
(TPD) programmes that focus on pedagogy related to the integration of ICT in the classroom
to inform effective teaching practices are one way of providing this solution.
Two key issues related to TPDs in the Indian engineering education context are: (i)
Reliance on in-service short-term training programmes (STTPs) and (ii) The
issue of large scale. The number of in-service teachers within engineering
education is around 0.5 million, introducing the need for scalable TPD programmes.
Thus, apart from the need for a good design, complexities may arise due to the scale.
Thus the broad problem statement of this thesis is: How to improve the design and
delivery of large-scale training programmes to in-service faculty in Indian
engineering education, to enable them in effectively integrating Information and
Communication Technology (ICT) tools within their teaching-learning context?
In order to address this problem, we have created the Attain-Align-Integrate-
Investigate (A2I2) model for designing of technology integration training
programmes. The A2I2 model has its theoretical basis in constructive alignment
(Biggs, 1996), and it utilizes the spiral curriculum (Bruner, 1960) and active learning
(Prince, 2004) in its implementation. Design Based Implementation Research (DBIR)
approach formed the methodological basis of this research. This model was used to
design and implement five training programmes under the banner “Educational
Technology for Engineering Teachers” (ET4ET), implemented across three
different modes – face-to-face, blended online and massive open online. In line
with the DBIR approach, evaluation studies conducted in each iteration informed us
of the effectiveness of the training and also helped in refining the model. The
evaluations were done on the metrics of reaction, learning, behaviour, participation
rates while scaling, and sustainability. Key results include (i) Participant teachers
reporting an attitude shift from teacher-centric to student-centric practices, (ii)
Participants showing an increased perception of competency in the use of wikis,
screencasts and visualizations within their practice, and (iii) Medium-term
sustainability of training benefits observed at the levels of teacher, student and
institution. The iterative refinement of the A2I2 model also resulted in three design
principles – Pertinency, Immersivity and Transfer of Ownership – that can be used to
scale and sustain TPD efforts.
[Jan 2012 - May 2017 (defended Oct 2017)]
Framework for generation and evaluation of assessment instruments:
[Co-guide: Dr. M Sasikumar]
Thesis: PDF file [7.3 M].
Slides: PPT [2.8 M].
Assessment is an integral part of instruction, and it has a profound influence on what students study, how much they study and how effectively they study. The design of an assessment instrument (AI) is one of the major components of the assessment process. If students are subjected to random and unfair AIs, then assessment may not serve the intended purpose. Hence, the quality of any educational assessment exercise depends on the quality of the AI used. The process of evaluating AI quality involves a lot of subjectivity, as the benchmark can vary from person to person. Manual evaluation of quality considering all the parameters is a very cumbersome task.
With this background, the broad research objective of the thesis is: "How to improve the quality of AIs, and how can technology assist in this process?" There are two ways to improve the quality of an AI, namely (i) evaluating the quality of a teacher-generated AI and providing feedback, or (ii) automatically generating the AI, so that quality is ensured at the time of creation itself. The first approach is the major focus of my thesis. Pursuing these two approaches resulted in the design of two frameworks, for the generation and evaluation of AIs.
There are many quality parameters of an AI. Based on the literature, the measure of alignment of an AI with the Learning Objectives (LOs) of the course is adopted as the quality of the AI in our work. Implementation of the frameworks resulted in the development of three tools, namely, Instrument Quality Evaluator (IQuE), Teacher Training Module (TTM) and AI Generator (AIGen).
IQuE measures the quality of an AI in terms of its alignment with the LOs of the course. An ontology-based Knowledge Representation (KR) mechanism is designed to integrate the contents of the syllabus, the LOs and the AI. Content and cognitive-level information are extracted from LOs and questions using simple Natural Language Processing (NLP) techniques. The measure of alignment is formulated based on the commonalities and differences in the concepts covered and the cognitive levels of the LOs and questions respectively. IQuE provides two types of output: a numerical measure of alignment and its visual representation. It also estimates the utility of each question, indicating its contribution towards the LOs. The accuracy of IQuE was tested with a large number of samples (N=1000), and the accuracy with respect to content and cognitive-level alignment is 91.2% and 93.23% respectively.
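IQuE's actual ontology and NLP pipeline are not detailed above. As a simplified sketch of the alignment idea, alignment can be taken as the overlap between (concept, cognitive level) pairs extracted from LOs and from questions; the pairs below are invented for illustration and this is not IQuE's actual formula:

```python
# Simplified sketch (not IQuE's actual formulation): alignment as the
# overlap between (concept, cognitive-level) pairs from LOs and questions.
lo_pairs = {("linked list", "understand"), ("stack", "apply"),
            ("queue", "apply"), ("binary tree", "analyze")}
question_pairs = {("linked list", "understand"), ("stack", "apply"),
                  ("stack", "remember")}

covered = lo_pairs & question_pairs          # LOs actually assessed
alignment = len(covered) / len(lo_pairs)     # fraction of LOs covered
print(f"alignment = {alignment:.2f}")        # 2 of 4 LOs covered -> 0.50
```

A real system would weight content and cognitive-level matches separately, as the 91.2% / 93.23% accuracy figures above suggest IQuE does.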
Using IQuE, a TTM was developed that can be used to train teachers to write good assessment questions against given LOs. It has a multistage environment and is supported by a formative feedback mechanism that gives feedback about the alignment of teacher-written questions against system-displayed LOs.
We have built a prototype version of AIGen that facilitates automatic generation of an AI from a teacher-entered AI specification (AIS) and a tagged question repository. In a preliminary investigation, it was found that the generated AIs were 80% compliant with the corresponding teacher-entered AIS.
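AIGen's internals are not described above. One simple way such generation can work is to draw questions from the tagged repository until a specification of counts per cognitive level is met; the question IDs and tags below are invented for illustration:

```python
# Illustrative only: fill an assessment-instrument specification (AIS)
# giving required question counts per cognitive level, drawing from a
# repository of questions tagged with their cognitive level.
repository = [
    ("Q1", "remember"), ("Q2", "apply"), ("Q3", "apply"),
    ("Q4", "analyze"), ("Q5", "remember"), ("Q6", "analyze"),
]
ais = {"remember": 1, "apply": 2, "analyze": 1}

instrument, needed = [], dict(ais)
for qid, level in repository:
    if needed.get(level, 0) > 0:      # still need a question at this level
        instrument.append(qid)
        needed[level] -= 1

print(instrument)  # ['Q1', 'Q2', 'Q3', 'Q4']
```

A real generator would also balance concepts, marks and difficulty, which is presumably where the 80% compliance figure above comes from.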
The context of the research work is AIs designed for written examinations in a typical university scenario in an engineering curriculum. All the samples for the study are taken from the Data Structures course in the engineering curriculum.
[July 2010 - Dec 2017 (defended July 2018)]
Development of mental rotation skills using 3D visualization tool:
Thesis: PDF file [6.2 M].
Slides: PDF [3.0 M].
First-year undergraduate students in engineering often face difficulties in learning and solving engineering drawing (ED) problems that require visualization of three-dimensional objects. Conventional methods of teaching ED require students to learn the course over a semester-long duration and involve long periods of sketching and drawing practice. Certainly, these teaching methods assist the learning of the subject, but they do not guarantee the elimination of the learning difficulties entirely (Akasah & Alias, 2010; Kosse, 2005).
In modern teaching methods, instructors make use of software tools such as computer-aided design (CAD), multimedia tutors, and web-based instruction as supplementary visual aids for learning ED concepts (Branoff & Mapson, 2009; Cincou, 2013; Jerz, 2002). These methods have been found useful in the teaching of the engineering drawing course and involve various additional activities like content presentations with voice-over, demonstrations of software, and videos of sketching. Despite the fact that these methods are useful in improving the learning of ED concepts and skills, certain difficulties remain (Kuang & Thomas, 2004). One of the potential reasons is students' deficiency in visualizing spatial relationships, i.e., poor spatial skills.
This problem is significant in the context of the engineering domain because ED is one of the fundamental courses for various engineering disciplines like Mechanical Engineering and Civil Engineering. Spatial skills, such as mental rotation (MR), play a major role in learning concepts that involve 3D visualization. Consequently, for the successful learning of such courses, students should be trained to develop the MR skill.
In order to address the issue, we have created a Blender-based training program ‘TIMeR’ to improve students' mental rotation skills and hence their learning of the relevant concepts. TIMeR stands for ‘Training to Improve Mental Rotation Skills’. TIMeR has three phases: the preparatory phase, the training phase, and the transfer phase. Each phase has a hands-on MR training task which is tightly coupled with the cognitive steps of mental rotation. Each task is executed using the instructional strategy of demo-drill-practice (DDP).
In this thesis, we present the work carried out to develop TIMeR and investigate its impact on the improvement of students' MR skill. We also extended the work to the domains of engineering drawing and computer graphics. A total of seven empirical research studies were carried out applying a mixed-methods research design. The key results include (i) TIMeR was found to be effective in developing mental rotation skill, engineering drawing problem-solving performance and computer graphics problem-solving performance, (ii) TIMeR helped students in resolving their engineering drawing learning difficulties, and (iii) TIMeR evolved into a workshop model and was successfully incorporated into the engineering drawing course. The training structure would be useful for teachers creating their own mental rotation training program when the training objects are of a three-dimensional nature.
[July 2010 - March 2016 (defended July 2016)]
A framework for scaffolding to teach programming to vernacular medium students:
Thesis: PDF file [5.2 M].
Students who study in their primary language in K-12 and go on to do their undergraduate education in English are known as vernacular medium students. Vernacular medium students face difficulty in acquiring programming knowledge in an English medium of instruction (MoI). Solutions targeted towards improving their English proficiency take time, while continuing to teach in a primary-language MoI limits the students' ability to compete in a global market. The key challenge is in developing a framework that helps vernacular medium students comprehend educational content presented in an English MoI. It will not only help them develop content knowledge but also increase English competency.
In this thesis, we address the problem of primary-language learners learning computer programming in an English MoI. In our solution approach, we first identify the problems of vernacular medium students from the literature review and then reconfirm them in the Indian context using a qualitative study. We identified and tested language-based scaffolds, cognitive scaffolds, and the environments in which these scaffolds work.
This thesis presents five research cycles which were used to identify, select and test the effectiveness of various scaffolds to teach programming to vernacular medium students. The research cycles produced five different prototypes. Each prototype uses a different set of scaffolds in its learning material, learning environment and presentation. Learning material for each prototype was selected or designed to reduce the cognitive load on students and provide language-based scaffolds. We used two different educational environments, 1) a classroom environment and 2) a self-paced video-based learning environment, to test our prototypes. Visualization guidelines for teaching various content types and multimedia principles were followed to reduce the cognitive load on students.
Cognitive scaffolds to reduce intrinsic cognitive load were identified from instructional design principles and from visualization guidelines for teaching various educational content types. We found that instructional design principles help in writing learning objectives and deciding prerequisites. Use of instructional design helps in the systematic planning of instruction, which removes instructional gaps and helps learners comprehend the presented learning material by reducing the intrinsic cognitive load. Visualization guidelines for teaching various educational content types (e.g. fact, process, concept, procedure, and principle) help in designing instructions that reduce the intrinsic cognitive load of vernacular medium students. We selected and tested multimedia principles that reduce the extrinsic cognitive load of vernacular medium students. These principles are the split-attention effect, segmentation, pre-training, synchronization, the redundancy effect, verbal redundancy and attention cueing.
We identified several language-based scaffolds that reduce the mental effort a vernacular medium student spends translating educational content presented in an English-only MoI. These language-based scaffolds are 1) use of a simple English MoI, 2) explanation of semi-specialized and specialized words on their first occurrence, and 3) use of a slow pace for vocal explanation. We also identified language-based scaffolds for a bilingual MoI when a classroom-based environment is used; these scaffolds are 1) use of a simple Hindi MoI for vocal explanation, 2) use of code-switching, and 3) use of an English MoI for specialized and semi-specialized words.
We conducted two qualitative studies and three quantitative studies to measure the effectiveness of the various scaffolds. We used a classroom-based environment in two research cycles and a self-paced video-based learning environment in three research cycles. We find that a self-paced video-based environment is more suitable for vernacular medium students than a classroom environment if an English-only MoI is used.
The main contributions of this thesis are 1) the identification of language-based scaffolds that help in comprehending educational content presented in an English-only MoI and a bilingual MoI, 2) a framework that helps teachers plan instruction for vernacular medium students under various conditions, and 3) the selection of visualization guidelines and multimedia principles to provide cognitive scaffolds.
[July 2010 - Dec 2014 (defended June 2015)]
A problem-solving methodology based on the extremality principle and its application to CS education:
Thesis: PDF file [1.3 M].
Slides: PDF [<1 M].
The extremality principle is a commonly used problem-solving strategy. The main idea is to
look at extremal cases of a problem in order to obtain insight into the general structure.
Though the principle is widely known, it is not explicitly discussed in algorithm textbooks
or taught in courses. We devise a methodology based on the extremality principle and use it
to solve a variety of algorithmic problems involving graphs. We apply the methodology to
three tasks that are relevant in the context of teaching algorithms.
The first task is related to the analysis of greedy algorithms. We are given an optimization
problem P and a greedy algorithm that is purported to solve the problem P. The goal is
to construct an instance of P on which the algorithm fails to give the correct answer. Such
an instance is called a counterexample. In a pilot experiment, we found that students had
difficulty in constructing counterexamples. We give a method to construct counterexamples
for many graph-theoretic problems. We apply our method to standard problems in graph
theory for which greedy algorithms exist in the literature.
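As an illustration of the kind of counterexample construction the first task targets, consider greedy vertex coloring, a standard example that is not necessarily among the thesis's problems. On a crown graph (K_{n,n} minus a perfect matching, which is 2-colorable), a bad vertex order forces the greedy algorithm to use n colors:

```python
from itertools import count

def greedy_coloring(adj, order):
    """Color vertices in the given order, assigning each vertex the
    smallest color not already used by a colored neighbor."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in count() if c not in used)
    return color

# Crown graph on 2n vertices: a_i adjacent to b_j for all i != j.
# It is bipartite, so the optimal coloring uses only 2 colors.
n = 4
adj = {f"a{i}": [f"b{j}" for j in range(n) if j != i] for i in range(n)}
adj.update({f"b{i}": [f"a{j}" for j in range(n) if j != i] for i in range(n)})

# The counterexample order interleaves the two sides: a0, b0, a1, b1, ...
bad_order = [v for i in range(n) for v in (f"a{i}", f"b{i}")]
coloring = greedy_coloring(adj, bad_order)
print(max(coloring.values()) + 1)  # greedy uses n = 4 colors, optimum is 2
```

The instance is extremal in the sense the methodology exploits: each a_i and b_i pair is forced to share a color, so every new pair introduces a fresh color.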
The second task is related to proving lower bounds on a query computational model.
We are given a graph G as input via its adjacency matrix A. We are also given a graph
property (like connectivity). The goal is to determine the minimum number of entries of A
that one needs to probe in order to check if G has the given property. We prove that if the
graph has n nodes, then for many common properties at least Ω(n²) probes are necessary.
We find that the lower bound proofs of this problem that are given in textbooks rely too much
on the connectivity property and do not generalize well. We give a method to prove lower
bounds in a systematic way. In a pilot experiment, we found that students were able to
understand and apply our method.
In the third task we use our methodology to solve a research problem. We give a
detailed account of the problem-solving process that could be of expository value.
We present preliminary work to show the use of extremality in linear programming.
[Jan 2010 - April 2014 (defended August 2014)]
Enriching the student model in an intelligent tutoring system:
Registered with: IITB-Monash Academy;
[Co-guide: Prof. Sahana Murthy];
[Monash Supervisors: Prof. Campbell Wilson; Prof. Judy Sheard]
Thesis: PDF file [3.2 M].
Slides: PDF [2.3 M].
An intelligent tutoring system (ITS) is a computer-based self-learning system which provides personalized learning content to students based on their needs and preferences. The importance of a student's affective component in learning has motivated adaptive ITSs to include learners' affective states in their student models.
Learner-centered emotions such as frustration, boredom, and confusion are considered in computer learning environments like ITS instead of other basic emotions such as happiness, sadness and fear. In our research we detect and respond to students' frustration while they interact with an ITS.
The existing approaches used to identify affective states include human observation, self-reporting, modeling affective states, face-based emotion recognition systems, and analyzing data from physical and physiological sensors. Among these, data-mining approaches and affective state modeling are feasible for the large scale deployment of ITS.
Systems using data-mining approaches to detect frustration have reported high accuracy, while systems that detect frustration by modeling affective states detect not only a student's affective state but also the reason for that state. In our approach we combine the two. We begin with the theoretical definition of frustration and operationalize it as a linear regression model by selecting and appropriately combining features from log-file data. We respond to students' frustration by displaying messages which motivate students to continue the session instead of getting more frustrated. These messages were created to praise the student's effort, attribute the results to external factors, show sympathy for failure, and get feedback from the students. The messages were displayed based on the reasons for frustration.
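The thesis's actual feature set and coefficients are not given above. A minimal sketch of operationalizing frustration as a linear regression over log-file features, with invented features, data and threshold, could look like:

```python
import numpy as np

# Illustrative sketch (features, ratings and threshold are invented,
# not the thesis's model). Each row is one attempt window in the logs:
# wrong answers, hint requests, and time-on-task in seconds.
X = np.array([
    [0, 0, 30.0],
    [2, 1, 90.0],
    [4, 3, 150.0],
    [1, 0, 45.0],
])
y = np.array([0.1, 0.5, 0.9, 0.2])  # observer-rated frustration in [0, 1]

# Fit a linear model by least squares, with an intercept column.
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Flag frustration instances by thresholding the model's prediction.
frustrated = A @ w > 0.7
print(frustrated)
```

The fitted weights play the role of the "appropriately combined" log features; a deployed system would also attach a reason (e.g. repeated wrong answers) to each flagged instance, as the thesis's model does.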
We have implemented our research in Mindspark, a mathematics ITS with a large-scale deployment, developed by Educational Initiatives, India. Facial observations of students were collected by human observers in order to build a ground-truth database for training and validating the frustration model. We used 932 facial observations from 27 students to create and validate our frustration model. Our approach shows results comparable to existing data-mining approaches and also to approaches that model the reasons for students' frustration. Our approach to responding to frustration was implemented in three schools in India. Data from 188 students from the three schools, collected across two weeks, was used for our analysis. The number of frustration instances per session after implementing our approach was analyzed. Our approach to responding to frustration reduced frustration instances statistically significantly (p < 0.05) in Mindspark sessions.
We then generalized our theory-driven approach to detect other affective states. Our generalized theory-driven approach was used to create a boredom model which detects students' boredom while they interact with an ITS. The process shows that our theory-driven approach is generalizable to model not only frustration but also to model other affective states.
[July 2009 - Dec 2013 (defended September 2015)]
Design considerations for creating e-learning animations:
Registered with: YCMOU; Worked at IIT Bombay;
[Co-guide: Prof. Sahana Murthy]
Thesis: PDF file [36.2 M].
Slides: PDF [12.7 M].
A learning object (LO) is the smallest independent structural experience that contains
an objective, a learning activity and an assessment. Usability of LOs is important
for creating effective eLearning content. The major stakeholders participating in a typical
LO creation process are the subject matter experts (SMEs), instructional design (ID)
writers and animators. Literature suggests that constant communication between
these stakeholders, assisted by good documentation, results in usable LOs. However,
this ideal scenario is difficult to implement, owing to the scarcity of experienced personnel
and the limited availability of these experts. Templates are used at the respective stages to
achieve systematic communication in the LO creation process. One of the key problems
that still persists is the miscommunication between ID writers and animators. Face-to-face
interaction between ID writers and animators is a popular solution applied to
address this problem. However, having unlimited face-to-face interaction as and when
required is not a scalable option.
The focus of this research is on the communication between ID writers and
animators, since this is the stage where textual and verbal information is translated
into visuals. We begin by analyzing feedback from animators, which reveals
a lack of visual communication (VC) aspects in the ID documents they receive. The solution
approach used in this research is to modify the ID template (IDT) by operationalizing
principles from VC domains (graphic design, multimedia, interaction design and animation).
Design-based research (DBR) is used to conduct research cycles (RCs) that
modify the IDT. In the initial RCs, ID writers (n=16) and animators (n=15) validate
successive versions of the IDT until a usable version is found.
Multiple experiments are conducted with students (n=128) and SMEs who compare
the usability of two types of LOs: one created using the IDT with VC principles
and the other created without them. Results show that the LOs
created using VC principles are more usable than those created without
them (p=0.000 and z=0.016). Unstructured interviews with SMEs reveal that the LOs
created with the VC principles are not only usable but also engaging.
[July 2003 - Oct 2008 (defended June 2009)]
Adaptive mechanisms for multimedia dissemination:
Thesis: PDF file [1.5 M].
Slides: PPT [3.8 M].
Delay-tolerant multimedia applications, where clients specify the time when playout must start,
fit the profile of many emerging applications, including distance education and corporate training.
Such applications typically depend on a Content Delivery Network (CDN), where the
Content Service Provider (CSP) disseminates multimedia content to geographically dispersed
clients through a distribution network from servers located on the CDN. In these applications,
a client Ci connects to the source S at time t0 and requests content with base encoding rate
Γ and playout duration T. Ci specifies the time it is willing to wait for the playout to start: its
delay tolerance δi. Playout at Ci starts at (t0 + δi) and ends at (t0 + δi + T). Note that t0 may
also be the start time of a scheduled streaming session.
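One back-of-envelope consequence of delay tolerance can be sketched as follows. This is an illustrative buffering model, not a formula from the thesis: if the client buffers during its delay tolerance δ, a bottleneck of sustained rate b suffices for playout whenever b·(δ + T) ≥ Γ·T.

```python
def min_link_rate(gamma, T, delta):
    """Minimum sustained bottleneck rate (same units as gamma) so that all
    gamma * T units of content arrive by the end of playout, assuming the
    client buffers for delta seconds before playout starts."""
    return gamma * T / (delta + T)

def max_supported_rate(b, T, delta):
    """Largest base encoding rate a bottleneck of sustained rate b supports
    under the same buffering assumption."""
    return b * (delta + T) / T

# A 1-hour (3600 s) stream at 512 kbps with a 10-minute delay tolerance:
rate = min_link_rate(512.0, 3600.0, 600.0)  # ~438.9 kbps suffices
```

This is why leveraging clients' delay tolerance can improve delivered quality on bandwidth-constrained paths: the larger δ is relative to T, the lower the sustained rate the path must provide.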
This thesis deals with the issue of maximizing the quality of the multimedia contents delivered at the
clients even when link bandwidths are constrained in the path from the source to the clients.
To achieve this, we develop a suite of algorithms that can exploit clients’ delay tolerance considering the following parameters:
(i) Service type: whether contents are streamed according to a schedule or occur on-demand and
(ii) Bandwidth: whether link capacity is static or variable, using appropriate resources at the nodes –
Transcoders, Layer encoders, or Streaming servers with transcoding or layering functionality.
In particular, we consider the following three cases: (i) Scheduled streaming when bandwidths are static,
(ii) Scheduled streaming when bandwidths are varying and predicted, and (iii) On-demand streaming when bandwidths are static.
The algorithms developed result either from formulating the objectives in an optimization framework
or from analyzing properties of the network topology and client requirements. Furthermore, where an optimal algorithm is computationally expensive,
we develop algorithms based on heuristics as practical alternatives. The approaches are validated through extensive simulations,
using topologies derived from real-world scenarios.
The algorithms developed in this thesis would help a CSP serving clients in a subscription based network in:
(i) improving the quality of reception at the clients by leveraging their delay tolerance values,
(ii) estimating the resources required to provide the best delivered rates to clients, and
(iii) determining placement of resources to maximize the delivered rates at clients.
Thus, using the analysis presented and algorithms developed in this thesis, a CSP can deploy resources in order to
ensure effective quality of service to clients and efficient resource utilization by leveraging clients’ delay tolerance.
[Jan 2003 - 2009; (defended July 2009)]
Design of multi-tier wireless mesh networks:
Thesis: PDF file [2.7 M].
Slides: PDF [2.7 M].
In this thesis, we investigate the issue of automated design of capacity-constrained Wireless Mesh Networks (WMN).
We argue for the necessity of applying network design methodologies from wired and cellular network fields in
wireless network design scenarios and present algorithms for Wireless Local Area Networks (WLANs) and backbone topology design.
The deployment scenario we envision is a campus of office buildings requiring wireless connectivity.
The client nodes to be deployed in each office, their application traffic requirements and the deployment layout are given.
We identify three main stages in the design of such wireless networks: 1) Association of clients to access points,
2) WLAN topology construction and 3) Backbone topology construction.
Capacity provisioning and network cost minimisation are the two constraints imposed on the design problem.
In the first stage, we define the AP-assignment problem, that is, the problem of associating client nodes with the
nearest Access Point (AP) and investigate various access point association scenarios.
We compute the performance of 802.11 WLANs under homogeneous realtime application deployments with theoretical and OPNET simulation results
for various voice and video codecs. We capture the performance, in terms of number of flows supported for an application,
as the capacity of a WLAN. We then examine heterogeneous application deployments and show the inability of
the 802.11 DCF mechanism to handle them. We propose an extended DCF to handle scenarios where delay-sensitive and delay-tolerant applications
are deployed together.
Next, we propose a novel approach called sub-optimal AP-assignment and show that the utilisation of 802.11 DCF can be increased by up to 75%.
We show how the solutions to the AP-assignment problem can then be used for abstract representations of the 802.11 DCF MAC in wireless design problems.
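The first-stage association step can be sketched as a greedy nearest-AP assignment with a per-AP capacity bound. This is an illustrative simplification, not the algorithms developed in the thesis; the AP names, coordinates and capacity value below are made up.

```python
import math

def assign_clients(clients, aps, capacity):
    """Associate each client with the nearest AP that still has spare
    capacity (at most `capacity` flows per AP).

    clients: list of (x, y) positions; aps: dict name -> (x, y).
    Returns dict client_index -> AP name (or None if blocked)."""
    load = {name: 0 for name in aps}
    assignment = {}
    for i, c in enumerate(clients):
        # Try APs in order of increasing distance from this client.
        for name in sorted(aps, key=lambda n: math.dist(c, aps[n])):
            if load[name] < capacity:
                assignment[i] = name
                load[name] += 1
                break
        else:
            assignment[i] = None  # blocked: no AP can admit this flow
    return assignment

aps = {"AP1": (0.0, 0.0), "AP2": (10.0, 0.0)}
clients = [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (9.0, 0.0)]
result = assign_clients(clients, aps, capacity=2)
```

Note how the capacity bound forces the third client onto its second-nearest AP, which is the essence of sub-optimal AP-assignment: trading nearest-AP association for better overall utilisation.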
In the second stage, we define a network design problem for constructing WLAN topologies.
The scenario we consider is intra-office connectivity for client nodes. We present a recursive bottom-up algorithm for
capacity-constrained topology construction. The topology construction algorithm considers the deployment scenario,
the client nodes deployed and their application scenarios as inputs. The AP-assignment solutions are used as a construction mechanism
in the form of link specification functions. We introduce a new object, called composite unit, as an abstract building block for
network topology construction. The generated topology is then validated with simulations (OPNET Modeler).
In the third stage, we define a wireless mesh network (WMN) design problem for constructing a mesh topology for a campus-like scenario.
The design problem is defined as a case of the traditional network design problem for optimal node location and topology construction.
These problems are cast as Mixed Integer Linear Programming (MILP) formulations and solved with an MILP solver (CPLEX).
We have built a tool to implement our multi-tier wireless design solution. The tool, the Wireless Infrastructure Deployment tool (WIND),
designs topologies for both WLANs as well as WMNs. WIND takes information about nodes deployed, their properties and deployment
layouts to construct logical network topologies. The input parameters and output topology use XML schemas compatible with
data formats of the OPNET Modeler network simulator. This allows the topology to be input directly to the simulator for validation.
Using simulation we show that the constructed topologies satisfy the given constraints on application scenarios,
protocols and deployment scenario. We present case studies of constructing topologies for both WLAN and WMN scenarios.
[Jan 2002 - Jun 2006 (defended Dec 2006)]
Cross layer feedback architecture for mobile device protocol stacks:
Thesis: PDF file [1.0 M].
Slides: PDF [1.3 M].
Applications using traditional protocol stacks (for example, TCP/IP) from wired networks
do not function efficiently in mobile wireless scenarios. This is primarily due to the layered architecture and implementation of protocol stacks.
Cross layer feedback is one of the mechanisms to improve the performance of a layered
stack in mobile wireless environments. For example, transport layer retransmissions could
be reduced by making it aware of network disconnections or hand-off events. One such
optimization is Receiver Window Control.
Since the protocol stack is an integral part of the operating system, any cross layer
modification to the stack should not impact its correctness, efficiency and maintainability.
An appropriate architecture would help ensure that cross layer modifications conform to these requirements.
We define the design goals for a cross layer architecture based on our study of the
pros and cons of existing approaches to cross layer feedback. We present our architecture
ECLAIR which addresses these design goals. In ECLAIR we exploit the fact that
stack behavior is determined by the values stored in various protocol data structures.
Our architecture facilitates easy manipulation of these values stored in the protocol data
structures. ECLAIR requires minimal or no modification to the existing protocol stack.
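The core idea, that stack behavior can be changed by writing values into protocol data structures, can be sketched as follows. The class and field names are invented stand-ins for kernel structures (a real ECLAIR tuner operates inside the operating system), and the receiver window control logic shown is a simplified rendering of that optimization.

```python
class MockTCB:
    """Stand-in for a TCP control block; a real tuner writes into the
    kernel's protocol data structures instead."""
    def __init__(self, rcv_wnd):
        self.rcv_wnd = rcv_wnd    # window the receiver advertises
        self.saved_wnd = rcv_wnd

class ReceiverWindowTuner:
    """Cross layer tuner: on a link-layer disconnection event, advertise a
    zero receive window so the peer stops sending (and retransmitting);
    restore the previous window on reconnection."""
    def __init__(self, tcb):
        self.tcb = tcb

    def on_disconnect(self):
        self.tcb.saved_wnd = self.tcb.rcv_wnd
        self.tcb.rcv_wnd = 0      # quench the sender during the hand-off

    def on_reconnect(self):
        self.tcb.rcv_wnd = self.tcb.saved_wnd

tcb = MockTCB(rcv_wnd=65535)
tuner = ReceiverWindowTuner(tcb)
tuner.on_disconnect()   # sender pauses, so no spurious retransmissions
tuner.on_reconnect()    # transfer resumes at the old window
```

The point of the sketch is the separation: the tuner lives outside the stack and only reads and writes data-structure values, which is why the stack itself needs minimal or no modification.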
We validate and evaluate ECLAIR through a prototype implementation of receiver
window control, and through experiments. To evaluate a cross layer architecture, we identify metrics
for evaluation against each of the design goals. We identify time and space overhead,
user-kernel crossing, data path delay, number of changes to protocol stack and number of
changes to cross layer optimization as metrics for the design goals. Our results and analyses show that ECLAIR is an efficient cross layer architecture and is easily maintainable.
To further enhance the efficiency of ECLAIR we propose a core sub-architecture.
Finally, we also present a design guide for cross layer optimizations using ECLAIR.
[Jul 2001- Apr 2007 (defended July 2008)]
Connectivity properties for Topology design in sparse multi-hop wireless networks:
Thesis: PDF file [840 K].
Slides: PPT [670 K].
Multi-hop Wireless Networks (MWNs) are decentralised, infrastructure-less networks enabled by
cooperative multi-hop routing among the participating nodes. In this work, we deal with topology
design with respect to connectivity properties for sparse MWNs.
In existing work, MWN topology design has primarily focused on one metric: connectivity.
Connectivity is the probability that all the nodes of a network form a single connected component.
Most related work consists of asymptotic analyses dealing with finding the values of network
parameters that ensure that the MWN is connected with high probability. The parameters defining
the network are usually the number of nodes, their transmission ranges, and the dimensions of the deployment area.
In this work, we deal with sparse MWNs, which are unlikely to be completely connected. We
argue that sparse networks can form during the functioning of MWNs, and further, that networks
can be designed to be sparse in order to facilitate tradeoffs between network parameters. Since
much existing work on connectivity is asymptotic, and since it focuses only on the operating point
at which the network becomes connected, we provide a finite-domain, empirical model for connectivity.
However, we find that connectivity is not ideal for dealing with sparse MWNs because
it is i) not indicative of the extent to which the network supports communication, and ii)
unresponsive to fine changes in network parameters. We introduce a connectivity property called
reachability, defined as the fraction of connected node pairs in the network, which we claim is
more appropriate for topology design in sparse MWNs. We define and prove properties of reachability,
and illustrate its application in performing fine-grained tradeoffs in network parameters
through a case study. We also provide a finite-domain, empirical characterisation of reachability,
and a tool called Spanner (Sparse Network Planner) to help apply this model. Spanner
takes any three of the following four quantities and computes the fourth: side of the deployment area,
number of nodes, uniform transmission range of the nodes, and reachability. Our empirical characterisations of connectivity
and reachability are for static networks with up to 500 nodes uniformly distributed at random in a
square area. These are also applicable to networks with mobile nodes where the mobility model
preserves the uniform distribution of nodes.
Much work in the area, including our characterisations of connectivity and reachability, is
for networks operating in a square area of deployment. We show that results obtained for a square
area do not necessarily apply even to similar rectangular areas. We ascribe this to the edge effect
by which nodes located near the boundaries of the area of operation cannot utilise their entire
transmission coverage for communication. We quantify analytically the expected coverage for a
single node in a rectangle and describe how this can be applied in extending results obtained for
square areas to rectangular areas.
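The edge effect can also be illustrated numerically. The thesis derives the expected coverage analytically; the sketch below merely estimates, by Monte Carlo and with arbitrary dimensions, the expected fraction of a node's transmission disc that falls inside the deployment rectangle.

```python
import math
import random

def expected_coverage_fraction(a, b, r, trials=20000, seed=1):
    """Monte-Carlo estimate of the expected fraction of a node's
    transmission disc (radius r) lying inside an a x b rectangle,
    for a node placed uniformly at random in the rectangle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(trials):
        # Random node position, then a uniformly random point of its disc.
        cx, cy = rng.uniform(0, a), rng.uniform(0, b)
        theta = rng.uniform(0, 2 * math.pi)
        rho = r * math.sqrt(rng.random())  # sqrt makes it uniform over the disc
        px, py = cx + rho * math.cos(theta), cy + rho * math.sin(theta)
        if 0 <= px <= a and 0 <= py <= b:
            inside += 1
    return inside / trials

# A disc much smaller than the area is rarely clipped by the boundary;
# a large disc loses a substantial fraction of its coverage to it.
small = expected_coverage_fraction(10, 10, 0.5)
large = expected_coverage_fraction(10, 10, 5.0)
```

The gap between the two estimates is the edge effect: nodes near the boundary cannot use their full transmission coverage, and the loss grows with the ratio of range to area dimensions.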
We have also developed a simulator, Simran, for studying topological properties of MWNs.
Simran takes as input a scenario file with initial positions and movement traces of nodes, and generates
a trace file containing metrics of topological interest such as average number of neighbors,
averaged shortest path lengths over all pairs of nodes, reachability, connectivity, and number and
size of connected components.
[Jan 2001 - Aug 2006 (defended July 2007)]
BoBs: Breakable objects - Building blocks for flexible application architectures:
Thesis: PDF file (old version) [3.6 M].
Slides: PDF [870 K].
Computing systems are becoming increasingly varied in terms of computing and
networking capabilities. A given application may often need to be deployed in diverse scenarios.
Additionally, software needs to evolve continuously, sometimes in
disparate directions, to suit a diverse portfolio of user requirements.
However, given an application defined for one scenario, it may not be possible to
simply refactor the application for direct reuse or easy adaptation to another
scenario. The application itself may need to be redesigned to enable such flexibility.
Hence a pertinent problem is: how do we design and implement a software system
such that it has less compulsion to be redesigned, and greater flexibility to be adapted?
In this thesis we propose a novel way of constructing such applications - wherein
an application is built using Breakable Objects or BoBs. BoBs are similar to the
traditional objects or components, but have the property that they can be readily
broken into sub-objects or sub-components. As such, BoBs naturally form the building
blocks for flexible application architectures. We term this architecture as BODA
- Breakable Object Driven Architecture. We claim that: (i) BODA provides an
architecturally robust mechanism for flexible fine grained reuse, and (ii) BODA greatly
facilitates automatic application translations for various deployment scenarios.
We apply BODA in the context of object-oriented programming systems. We
present a programming model for BoBs in Java called JavaBoB. JavaBoB is a subset
of the Java language specification, with some small extensions to the present language.
We also present mechanisms by which BoBs can be composed, decomposed and
used in an application. Furthermore, the BODA process is illustrated and evaluated
in the thesis using three main case studies and some distilled data from similar other projects.
CarromTutor: Playing strategies and implementation:
Thesis: PDF file [4.8 M].
Slides: PDF [2.4 M].
With the advent of new technologies and the widespread use of the Internet, the usage of tutoring
systems is steadily increasing. Online tutors offer flexible, comfortable learning and help
improve users' confidence. The game-based Carrom Tutor is a tutoring environment
aimed at teaching various carrom skills and strategies to carrom aspirants. We built
two systems, Carrom Tutor 1.0 and Carrom Tutor 2.0. Carrom Tutor 1.0 is a web-based
system that teaches carrom skills and tests the user's knowledge. Carrom Tutor 2.0 provides
a game environment, built in Blender 3D, for learning carrom skills. While designing these
tutors we took into account the perspectives of educational technology, game-based learning,
and software and user interface design.
In this report, we discuss different carrom skills and strategies. We then describe the
design, implementation and user experiments of Carrom Tutor 1.0, followed by the
detailed design of Carrom Tutor 2.0 and its evaluation. We demonstrate the
effectiveness of our system by discussing the results of a user experiment conducted
in comparison with another carrom application. Finally, we discuss the challenges faced while
building the tutor and the implications of our work for future development of the system.
CarromTutor: Game-based learning and implementation:
Thesis: PDF file [3.7 M].
Slides: PDF [2.4 M].
Rapid improvements in technology and education are steadily increasing the need for effective
tutors. Tutoring can be done over the web or by traditional methods. In
this report, we describe the development of a web-based Carrom Tutor and a game-based
Carrom Tutor implemented in Blender 3D. We refer to these tutors as Carrom Tutor 1.0
and Carrom Tutor 2.0, respectively. As the title suggests, these tutors are initiatives for
teaching carrom skills and strategies to carrom aspirants. While designing the tutors,
educational technology, game-based learning and software design perspectives have
been taken into account.
At the beginning of the report, principles of game-based learning and educational
game design are discussed. The next chapter covers Carrom Tutor 1.0: its design,
implementation, demonstration and user experiments. The report then covers the
motivation behind building Carrom Tutor 2.0, and provides detailed descriptions of
Carrom Tutor 2.0's design, implementation, demonstration, user experiments and challenges.
The technological tools and platforms considered for implementation, along with details of
the actual implementation, are described. Finally, the experiments conducted to test
the effectiveness of these systems are described, along with possible future work.
GATutor: Greedy algorithms tutor using guided-discovery learning approach:
Thesis: PDF file [3.0 M].
Slides: PPT [6.0 M].
Algorithms are at the core of computer science. While good material and
visualizations for learning algorithms exist, it is important
to devise ways of learning algorithms actively through visualization. Greedy
algorithms are an important class of algorithms, and it is useful to develop an
effective teaching strategy for them. We have created a framework for effective
teaching of greedy algorithms based on a guided discovery learning approach.
We implemented the framework as a system named GATutor, incorporating
principles of educational technology, software design, and a user-friendly
interface. We have constructed learning material for four areas where greedy
algorithms can be applied to find the optimal solution.
In this report, we first discuss our framework, its design decisions, and how it
helps effective learning. We then describe the design of the GATutor system,
its implementation, and the challenges faced. Finally, we demonstrate the efficacy
of the system by conducting learning, attractiveness and usability evaluations of the system.
GATutor: Intelligent tutoring system for greedy algorithms:
Thesis: PDF file [2.3 M].
Slides: PPT [6.0 M].
Algorithms are an important part of computer science courses and have
widespread applications. Given their importance, it is necessary to
teach and learn them with sound clarity. There have been many attempts
to do so in the past, primarily with animation. With many computer-based
tutoring systems already available, we have built an Intelligent Tutoring System
for effective teaching and learning of greedy algorithms: GATutor. For this
we also devised a general teaching framework, based on guided discovery learning
principles, which can be applied to any greedy algorithm. The system leads users
to develop an algorithm themselves by providing stimulating questions and timely
hints tied to real-life scenarios. Users' data is analyzed by
the system, which then provides insights to the teacher.
In this report, we first describe various other systems for teaching
algorithms. We then explain our framework for teaching greedy algorithms.
After that we explain in detail the software design of our system, its module
details and the challenges faced in building them. Finally we describe the
system evaluation based on usability, learning and attractiveness, along with
future work.
Moodle plugin for Socratic Questioning:
Thesis: PDF file [2.2 M].
Slides: PDF [2.0 M].
Demos: Demo1 [28.2 M], Demo2 [14.3 M], Demo3 [5.9 M], Demo4 [4.6 M].
Web-based distance education is used in higher education as a means of
providing knowledge to a diverse community of individuals.
Teaching a subject to a large number of students is always a challenge for the
teacher. Teachers use many teaching strategies to teach a subject,
but replicating these in a computer-based system is difficult.
Providing an effective Intelligent Tutoring System (ITS)
framework is therefore a challenge.
While many approaches exist for solving this problem, I selected the Socratic
Questioning teaching strategy. I have developed a Moodle plugin that implements
this teaching strategy, helping teachers teach their
courses using the Socratic Questioning method in a distance learning environment.
The plugin provides functionality to create quizzes using Socratic
Questioning strategies. I then discuss the functionality of the plugin,
which is used as an activity in Moodle, followed
by its implementation, and the challenges in integrating the plugin with Moodle.
Moodle plugin for Game-based Learning:
Thesis: PDF file [2.4 M].
Slides: PDF [2.4 M].
Demos: Demo1 [1.2 M], Demo2 [1.3 M], Demo3 [3.9 M], Demo4 [1.2 M].
A change in teaching and evaluation methods is needed these days because of the
significant growth of the Internet. I explored and argued why game-based learning matters to
current methods of teaching. Since many students spend much of their time
playing online games, I decided to develop games that can be used to motivate
students and increase their involvement in learning activities. The Moodle platform is used by many reputed
educational institutes for managing their curricula. I developed four educational games as a
Moodle plugin to motivate students and create more interest in learning activities among the student
community. Details of each game, including its motivation and benefits, are discussed in this report.
Development of ITS framework: Using scaffolding strategy
Chandra Pal Singh
Thesis: PDF file [1.7 M].
Slides: PDF [1.0 M].
Demos: Demo1 [43.3 M], Demo2 [78.6 M], Demo3 [68.4 M].
Over the past decade, distance education programs have developed at an extraordinary rate. Web-based
distance education has emerged in higher education as a means of providing a variety of
educational opportunities to a diverse community of individuals. Teaching programming to a
student is always a challenge for the teacher. Teachers use many teaching strategies, but
replicating these in a computer-based system is difficult, so delivering
an effective Intelligent Tutoring System (ITS) is a challenge. While many approaches exist for
solving this problem, this project developed an ITS framework that incorporates more than
one teaching strategy in one place. The purpose of this project was to develop an interactive and
adaptive Intelligent Tutoring System framework for teaching programming languages to
first-year computer science undergraduate students.
In this effort the design and architecture of our ITS framework are explained. We build an ITS
from this framework. ITSs are required to follow at least one teaching-learning strategy to
achieve the learning objectives. Our ITS follows four teaching strategies, of which scaffolding
is one. Scaffolding is a teaching-learning strategy in which the
instructor provides temporary support so that the student is able to solve questions
they could not solve without help. This ITS can be used to teach multiple subjects.
Development of ITS framework: Using socratic questioning strategy
Thesis: PDF file [ ].
Slides: PDF [1.9 M].
Demos: Demo1 [63.4 M].
An Intelligent Tutoring System (ITS) is a computer-based instruction system that is used to
teach a student with minimal interaction with a human tutor. We did a literature survey of ITSs
such as Wayang Outpost, SQL-Tutor and Thermo-Tutor to understand their architecture.
Most existing ITSs support only one type of teaching-learning strategy or are limited to
one subject. We have built a framework for the development of ITSs that can support multiple
teaching strategies. In this report, I discuss the ITS framework for a strategy called
Socratic Questioning. Socratic Questioning is a teaching-learning strategy composed of 4 steps:
(i) ask a question, (ii) wait for the response, (iii) take the response, (iv) ask the next question based on the response.
Scaffolding, Guided Discovery and Game-based Learning were implemented by my fellow students
Chandrapal Singh, M. Rajashekhar and Praveen Dhanala, respectively. In the last section, we
discuss how we integrated these different teaching strategies into one system.
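The 4-step Socratic loop can be sketched as a small question graph, where the student's response selects the next question. The questions, responses and identifiers below are invented examples, not content from the plugin.

```python
# Each node holds a question and maps student responses to the next node.
# This mirrors the loop: ask -> wait -> take response -> next question.
QUESTIONS = {
    "q1": {"text": "What does a variable store?",
           "next": {"a value": "q2", "other": "q1_hint"}},
    "q1_hint": {"text": "Think of a variable as a named box. What goes in it?",
                "next": {"a value": "q2", "other": "q1_hint"}},
    "q2": {"text": "Well done. Which operator assigns a value?",
           "next": {}},
}

def socratic_step(node_id, response):
    """Given the current question and the student's response, return the id
    of the next question to ask (step iv of the strategy)."""
    nxt = QUESTIONS[node_id]["next"]
    # Unrecognized responses fall through to the "other" branch (a hint),
    # or stay on the same question if no hint branch exists.
    return nxt.get(response, nxt.get("other", node_id))

step1 = socratic_step("q1", "something else")  # wrong answer -> hint
step2 = socratic_step("q1_hint", "a value")    # correct -> next question
```

In the actual plugin, the teacher would author such a graph through the quiz-creation interface, and Moodle would render one node per page of the activity.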
Development of ITS framework: Using guided-discovery learning
M Raja Shekhar
Thesis: PDF file [2.5 M].
Slides: PDF [1.9 M].
Demos: Demo1 [20.3 M].
Online learning and distance education have gained significant importance recently,
which demands the development of online tutoring systems. Merely presenting content
to the learner does not serve the tutoring purpose; the tutoring system should adapt itself
to the learner's subject knowledge. Tutoring becomes effective if it includes hands-on activities
and multiple ways of teaching. We present the details of a tutoring system framework that
adapts to the learner's subject knowledge and supports 4 teaching styles, namely guided discovery,
Socratic questioning (Q&A), scaffolding and game-based learning. We are a team of 4 members
working on the development of this framework. In this report, we explain the ITS framework
using the guided discovery teaching style.
Development of ITS framework: Using game-based learning
Thesis: PDF file [2.4 M].
Slides: PDF [1.9 M].
Demos: Demo1 [3.9 M], Demo2 [7.3 M].
A lot of research and development has taken place in the past few years on the use of
computers as teaching tools. While many tutoring systems already exist, building an
effective Intelligent Tutoring System is a challenge.
We have built a framework for the development of Intelligent Tutoring Systems (ITS)
that can support multiple teaching strategies. In this report, I discuss the ITS
framework for a strategy called game-based learning. Game-based learning is a teaching-
learning strategy built on software applications or products that use games for
learning or educational purposes. In an educational game, the instructional content
is blended with game characteristics.
We have implemented the system using PHP with MySQL as the database.
An Android app is also built to bring the learning process into the hands of the student.
The other 3 strategies, Scaffolding, Socratic Questioning and Guided Discovery,
were implemented by my fellow students Chandrapal Singh, Vikash Kumar and
M. Rajashekhar, respectively. In the last section we discuss how we
integrated these 4 different teaching strategies into one system.
Automated construction of domain ontologies from lecture notes:
Thesis: PDF file [3.2 M].
Slides: PDF [2.2 M].
Nowadays e-learning has become popular, especially with the availability of large courseware
repositories such as MIT's OCW, NPTEL and CDEEP. A variety of searching techniques, e-learning
tools and systems are also available. Courseware repositories contain large amounts of
lecture videos and text. When searching for lecture material on a given topic, it would be useful
if the repository also indicated the topics that are prerequisites. However, when a user wants
to learn about a particular topic of a subject, search tools typically return a large number of
links in response to the query (topic). Many of these are not directly related to the
topic: some are more advanced topics, and some links contain irrelevant data that has
nothing to do with the desired topic, so the user does not know which links to
follow in order to enhance their knowledge.
In this report we present a technique that automatically constructs an ontology (dependency
graph) from given lecture notes. We show how this ontology can be used to identify the
prerequisite and follow-up modules for a given query (lecture topic). We also provide the user
with a dependency graph that gives a conceptual view of the domain. Our system extracts
concepts using the term frequency-inverse document frequency (tf-idf) weighting scheme and
then determines associations among concepts using the apriori algorithm. We have evaluated
our system by comparing its results with the dependencies determined by an expert in the domain.
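The concept-extraction step can be sketched in a few lines of tf-idf ranking. The modules and tokens below are toy examples, and the subsequent apriori-based association mining between concepts is omitted here.

```python
import math
from collections import Counter

def tfidf_concepts(modules, top_k=2):
    """Rank candidate concept terms per lecture module by tf-idf.

    modules: dict module_name -> list of tokens (tokenised lecture notes).
    Returns dict module_name -> top_k highest-scoring terms."""
    n = len(modules)
    df = Counter()                      # document frequency of each term
    for tokens in modules.values():
        df.update(set(tokens))
    result = {}
    for name, tokens in modules.items():
        tf = Counter(tokens)
        scores = {t: (tf[t] / len(tokens)) * math.log(n / df[t]) for t in tf}
        result[name] = [t for t, _ in sorted(scores.items(),
                                             key=lambda kv: -kv[1])[:top_k]]
    return result

modules = {
    "sorting":   "quicksort pivot partition pivot recursion".split(),
    "graphs":    "bfs queue graph traversal graph".split(),
    "recursion": "recursion base case stack recursion".split(),
}
top = tfidf_concepts(modules)
```

Terms that recur within one module but are rare across modules (such as "pivot" above) score highest, which is why tf-idf is a reasonable first pass at picking out module-specific concepts before mining the dependencies among them.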
Automated tagging to enable fine-grained browsing of lecture videos:
K Vijaya Kumar
Thesis: PDF file [3.1 M].
Slides: PDF [4.1 M].
Many universities offer distance learning by recording classroom lectures and making them accessible to remote students over the Internet. A university's repository usually contains hundreds of such lecture videos. Each lecture video is typically an hour's duration and is often monolithic. It is cumbersome for students to search through an entire video, or across many videos, in order to find portions of their immediate interest. It is desirable to have a system that takes user-given keywords as a query and provides links to not only the corresponding lecture videos but also to the section within the video. In order to do this, lecture videos are sometimes tagged with meta-data to enable easy identification of the different sections. However, such tagging is often done manually and is a time-consuming process.
We proposed an approach to automatically generate tags for lecture videos and make lecture videos easily accessible. The approach is based on extracting textual information from lecture videos and indexing the text data to provide search features in lecture video repository. We used Automatic Speech Recognition (ASR) and optical character recognition (OCR) technologies for the content extraction. In user interface, our system provides links to slides of a lecture. When a slide link is clicked, video also seeks to location where exactly the slide is discussed in that video and starts playing from that part. For automatically generating the slide index, we proposed an algorithm based on slide title matching which uses OCR. Following the approach and using open source tools mentioned here, a lecture video repository can provide features for its users to access content required by them easily.
We used open source Java libraries for speech recognition and text search. We designed a generalized framework for creating the language models required during the automatic speech recognition process. We performed experiments to test the performance of our system and achieved a recall of 0.72 and an average precision of 0.87 for video retrieval. We also tested the performance of our slide detection algorithm; on average it detects 95 percent of the slides of a lecture.
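As an illustration of the slide-title matching idea (not code from the thesis; the function name, timestamps, and fuzzy-match threshold are invented for this sketch), OCR'd frame text can be paired with slide titles roughly like this:

```python
from difflib import SequenceMatcher

def build_slide_index(slide_titles, ocr_frames, threshold=0.8):
    """Map each slide title to the first frame timestamp whose OCR'd
    text is a close enough fuzzy match for that title."""
    index = {}
    for title in slide_titles:
        for timestamp, text in ocr_frames:
            if SequenceMatcher(None, title.lower(), text.lower()).ratio() >= threshold:
                index[title] = timestamp
                break
    return index

# hypothetical OCR output: (seconds into the video, recognised slide text)
frames = [(0, "Introduction"), (95, "Routing Basics"), (300, "Summary")]
print(build_slide_index(["Routing Basics", "Summary"], frames))
# → {'Routing Basics': 95, 'Summary': 300}
```

A real pipeline would also have to tolerate OCR noise, which is why a fuzzy threshold rather than exact matching is used here.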
Problem-based-learning tool as a plugin for Moodle:
Thesis: PDF file [3.5 M].
Slides: PDF [500 K].
Problem Based Learning (PBL) is a student-centric teaching-learning strategy. In PBL, students solve a problem or problems in a group to achieve the learning objective (LO). Many studies have shown that PBL is a very effective instructional strategy, particularly in fields like medicine and engineering, where students need to apply their knowledge to solve real-life problems. But the implementation of PBL is time-consuming, and both students and facilitators sometimes find it hard to get started.
Assessment, communication, and judging student progress are also difficult parts of running a successful PBL course. So a well-structured platform is required to support the whole process of a PBL course. Existing Learning Management Systems (LMS) or Course Management Systems (CMS) like Moodle can be used to manage PBL courses, but these LMSs are very general, so a tool built around the pedagogical philosophy of PBL is needed for more effective results. As these LMSs are widely used by universities, schools, and other organizations, and some of their existing features can be reused, it is better to build the PBL tool as an add-on for an existing LMS.
Thus, to support PBL we have developed a plug-in for Moodle as an activity module. For assessment, another module named Rubrics was created. In this report, the features of existing LMSs are explored, the steps of PBL are described, and the internal structure and features of Moodle are depicted. We also propose a system which can support each of the steps of PBL, and finally we provide the user and developer documentation of the PBL and Rubrics modules.
Interactive tutoring system for high-school geometry:
Thesis: PDF file [1.6 M].
Slides: PDF [1.7 M].
Most tutoring systems and self-learning software are restricted to objective-type
questions for assessment and evaluation. However, objective-type questions are not
very effective in assessing students' ability to solve proof problems in mathematics.
The purpose of this project is to develop an interactive proof module which guides
high school students while solving proof type problems in mathematics. The content
creator enters the problems and the acceptable solutions into the system. The student
proves a problem by assembling the statements of a proof from a stack of options.
The system compares the proof assembled by the student with the model solutions
at each step and gives appropriate hints and feedback.
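The step-by-step comparison against model solutions can be sketched as follows. This is a hypothetical illustration, with the function `check_step`, the sample proof statements, and the hint format all invented rather than taken from the project:

```python
def check_step(student_steps, model_solutions):
    """Verdict for the student's latest proof step: 'correct' if the steps
    so far are a prefix of some accepted solution; otherwise a hint taken
    from a solution matching all but the last step."""
    for solution in model_solutions:
        if solution[:len(student_steps)] == student_steps:
            return ("correct", None)
    prefix = student_steps[:-1]
    for solution in model_solutions:
        if solution[:len(prefix)] == prefix and len(solution) > len(prefix):
            return ("incorrect", "Hint: consider '{}'".format(solution[len(prefix)]))
    return ("incorrect", "Hint: revisit the earlier steps.")

model = [["AB = AC", "angle B = angle C", "triangle ABC is isosceles"]]
print(check_step(["AB = AC"], model))            # → ('correct', None)
print(check_step(["AB = AC", "bogus"], model))   # incorrect, hints the next valid statement
```

Multiple model solutions are kept as a list because, as the abstract notes, the content creator may enter several acceptable proofs for one problem.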
Study-element based adaptation of lecture videos to mobile devices:
Ganesh Narayana Murthy
Thesis: PDF file [2.1 M].
Slides: PPT [1.2 M].
Access to lecture videos on mobile devices like mobile phones and PDAs would benefit
students by providing quick, anywhere access to information. CDEEP lecture videos
have a high video bit-rate that makes viewing them unsuitable over low-bandwidth
connections like GPRS, the network mobile devices generally use. Further, such
networks charge users by the amount of data transferred.
The aim of this thesis is to adapt lecture videos of the Centre for Distance Engineering
Education Programme (CDEEP) of IIT Bombay, so that they can be viewed on
a mobile device at low network bandwidth and low cost. Transcoding, i.e. converting
to another format, is the most common method; its efficiency at low bit-rates has been examined.
Content-Based Adaptation is a way to adapt videos based on the content present in
the videos. A few examples of this methodology are discussed. A new method based on
this methodology, called "Study-Element Based Adaptation", which focuses
on dividing the video into study-elements and then adapting the video based on each
element, is defined and introduced. Its efficiency at low bit-rates, in terms of total
cost incurred by the user and the user experience, is explored and compared with the
transcoding approach to adaptation.
Identifying study-elements in the video requires tagging of the video. The image-processing-based
tagging that has been developed is explained, and its accuracy measured.
Finally, possible future improvements to the study-element-based methodology are discussed.
Adaptation of applets and videos to mobile devices:
Thesis: PDF file [630 K].
Slides: PPT [1.0 M].
Many educational organizations provide e-learning content in the form of
animations and lecture videos. Displaying this content on mobile devices poses many problems.
Mobile devices work on J2ME technology, which is incompatible with showing these
animations in the form of applets, whereas streaming of live videos on mobile devices
is difficult because mobile devices have limited memory resources. Live videos also
require large network bandwidth, so they cannot be shown over mobile device connections
with low network bandwidth.
The aim of this thesis is to adapt animations and live videos to mobile devices.
Different approaches for porting Java applets onto mobile devices, such as J2ME MIDlets
and Flash files, have been explored along with their relative merits and demerits. The thesis also
presents the differences, in terms of feasibility and complexity, in converting J2SE applets into
J2ME MIDlets. Similarly, different possible ways to adapt live videos have been explored. An
implementation of live video streaming of CDEEP lecture videos in a cost-effective and
adaptive way for mobile devices has been provided, which is integrated into the system
proposed by the supporting thesis on study-element based adaptation.
Unicast-Multicast gateway for tunneling lectures from any CDEEP studio to VSAT network:
[Guide: Prof. Puru Kulkarni]
Thesis: PDF file [1.9 M].
Slides: PPT [1.1 M].
Satellite communication networks today are used for educational purposes. These
networks can become vital if they can be used to reach out to the remote areas of
our country. Institutions interested in joining educational programmes through
the satellite must set up a satellite transceiver. The satellite network is then available
only at the location of that setup, so the courses to be transmitted over the
satellite have to be captured live where the satellite network is available. Courses
being conducted in other blocks or departments, even though they are connected by
the campus network, cannot be transmitted through the satellite because the two are
separate networks. So professors from other departments have to come down to where
the satellite link is available to give their lectures. We discuss how to design and
implement a solution to interconnect the satellite network and the campus network,
so that courses conducted anywhere in the network can be transmitted through the
satellite. One of the main concerns here is that satellite communication networks
are multicast, whereas campus networks may not be multicast-aware. Because of
this, the campus network cannot be directly plugged into the satellite network, as some
form of protocol conversion may be required.
An extensive study of interconnecting the satellite network and the campus network
is made. Here we specifically target EDUSAT satellite network used by CDEEP
(Centre for Distance Engineering Education Programme) and the campus network at
IIT Bombay. The aim is to find ways to channel the data transmitted and received
on the satellite network through the campus network with ease, making the least
changes to the existing setup. We find that the tunneling method is
highly reliable and the easiest to set up for CDEEP. This method was
tested on the live EDUSAT satellite network, and a quantitative analysis of the performance
of tunneling is also made.
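The gateway's core job, relaying datagrams from the unicast campus side onto the multicast satellite side, can be sketched as below. This is an illustrative assumption, not the project's code; in a real gateway the two callbacks would wrap UDP sockets, with the sender configured (e.g. via the IP_MULTICAST_TTL socket option) for the satellite-facing multicast group:

```python
def relay(recv_packet, send_packet, max_packets=None):
    """Core gateway loop: pull unicast datagrams from the campus-network
    side and push them unchanged onto the satellite (multicast) side.
    recv_packet returns None when the source is exhausted."""
    count = 0
    while max_packets is None or count < max_packets:
        data = recv_packet()
        if data is None:
            break
        send_packet(data)
        count += 1
    return count

# stand-in for sockets: an in-memory packet source and sink
packets = iter([b"frame-1", b"frame-2"])
forwarded = []
print(relay(lambda: next(packets, None), forwarded.append))  # → 2
```

Injecting the receive/send callables keeps the protocol-conversion loop testable without live satellite or campus network access.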
Performance Analysis Of Live Streaming Using Content Distribution Algorithms for CDEEP Webcast:
[Guide: Prof. Puru Kulkarni]
Thesis: PDF file [1.2 M].
Slides: PDF [600 K].
Webcasting is becoming an important technology for distance education. The Centre
for Distance Engineering Education Programme (CDEEP), IIT Bombay, uses this technology
to transmit live courses as well as important academic events. CDEEP
currently uses a unicast streaming architecture in which it establishes a TCP or UDP
connection per streaming client. This architecture is not scalable: as the number of
clients increases, the load on the server as well as on the network increases. Once the number
of users exceeds server capacity, packet loss and network delay lead
to lower perceived video quality. The same live streaming content is distributed
to all clients, so there is a need for an efficient distribution mechanism which is
bandwidth-efficient and scalable. IP multicast can meet these objectives, but
there are limitations on the deployment of IP multicast that we discuss in this thesis.
In this thesis we propose a robust and more scalable solution based on
application-level multicast, in which packets are replicated at the application level instead
of the network level. We have implemented the basic architecture for live
content distribution, and then added static as well as dynamic redirection policies to
avoid overloading the main server. We evaluated the performance of these algorithms
on an emulated testbed and showed that static redirection works well
to minimize the network load, whereas dynamic policies are more useful to balance
the network load on distribution servers. For our emulated setup we have shown that
the main server's network load reduces by 41% using 2 distribution servers. Also, the
load-balancing value (0 best and 1.33 worst) for the subnet-based policy is 0.31, whereas
for the round-robin and least-connection based algorithms it is close to zero (0.05).
Applying system dynamics principles to CDEEP system:
[Guide: Prof. Sahana Murthy]
Thesis: PDF file [830 K].
Slides: PPT [1.7 M].
Distance education is seen as a cost-effective alternative to conventional classroom education.
IIT Bombay started the Centre for Distance Engineering Education Programme
(CDEEP) to provide high-quality distance education in engineering and science to a large
number of students throughout the country. It provides distance education through live
transmission of IIT courses, via satellite and webcast. This initiative helps
improve the deteriorating state of engineering education in India. This report attempts
to model the CDEEP system using System Dynamics, a well-known system modeling approach.
It analyzes various feedback loops in the system that cause dynamic changes in the behaviour
of the system. It also makes policy recommendations for refining the structure of the system,
which in turn could improve its behaviour.
[2008-2009 (Sabbatical year)]
Java Applets Animations To Java Midlets Animations Conversion:
Kapil Kadam [External Student];
[Guide: Dr. D. B. Kulkarni, Walchand College of Engineering, Sangli]
The objective of the dissertation is to convert Java applet animations to Java
MIDlet animations that can be displayed on mobile phones. The main goal is to convert
animations available at Project OSCAR (Open Source Courseware Animation
Repository), IIT Bombay. J2ME (Java 2 Micro Edition) combines a resource-constrained
JVM (Java Virtual Machine) and a set of Java APIs for developing
applications for mobile devices. J2ME does not contain all the classes and
interfaces that J2SE contains, making it hard for developers to convert Java
applet code to Java MIDlets.
J2ME has two important APIs: the CLDC API
(Connected Limited Device Configuration) and the MIDP API (Mobile Information Device Profile).
The CLDC defines a set of interfaces that facilitate generic
networking and leaves the details of implementing those interfaces to the MIDP API.
The bulk of the classes and interfaces in the CLDC API are inherited directly from the
J2SE API. J2ME stresses that classes which are present
in the J2SE API and have been used in J2ME should not be changed. This
helps portability between J2SE and J2ME. The javax.microedition.midlet
API package provides a base for developing all mobile applications. It contains the
javax.microedition.midlet.MIDlet class, which is the base class of all J2ME-
based mobile applications (also known as MIDlets) and must be extended by the
main classes of all mobile applications. Quite similar to the java.applet.Applet
class, the MIDlet class provides resources necessary to create midlets.
The conversion methodology consists of three main steps: Java to XML conversion,
XML to XML transformation, and XML to Java conversion. It also includes the
creation of a package for those J2SE classes which are not supported by MIDlets.
A few example applet animations from Project OSCAR, IIT Bombay, have been
tested on Sun's Wireless Toolkit emulator, with positive results.
Discovering dependencies in courseware repositories:
Thesis: PDF file [2.1 M].
Slides: PDF [135 K].
E-learning has become popular, especially with the availability of large courseware
repositories such as NPTEL and OCW. A variety of e-learning tools and systems are also available.
However, when a user wants to learn about a particular subject, search tools typically
just return a large number of links in response to the query. Many of these are
not directly relevant, so the user does not know which links to follow in order to advance his/her learning.
In this project we have built a system which not only provides the user with the most relevant
learning module for a query, but also the relevant prerequisite
and follow-up modules. This is done by creating a dependency graph of the courseware
modules in the repository. We have implemented and tested our system on six courses taken
from the NPTEL repository. For effective evaluation of our solution approach, we have not only
compared the results for four different heuristics but also compared them with the dependencies
determined by an expert in the subject area.
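As a hedged illustration of how such a dependency graph could be used (the function name, module names, and edges below are invented, not taken from the thesis), the prerequisites of a target module can be listed in study order with a depth-first traversal:

```python
def learning_path(dependencies, target):
    """Return the prerequisites of `target` in study order, ending with
    `target` itself. `dependencies` maps a module to the modules it
    directly depends on (the dependency graph)."""
    order, seen = [], set()

    def visit(module):
        if module in seen:
            return
        seen.add(module)
        for prereq in dependencies.get(module, []):
            visit(prereq)       # prerequisites come before the module
        order.append(module)

    visit(target)
    return order

# hypothetical dependency graph over three courseware modules
deps = {"TCP": ["IP"], "IP": ["Link"], "Link": []}
print(learning_path(deps, "TCP"))  # → ['Link', 'IP', 'TCP']
```

This is a post-order traversal, so each module appears only after everything it depends on; it assumes the expert-checked graph is acyclic.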
Implementation of WiFiRe MAC: Framing, Memory and Wireless modules:
[co-guide: Prof. Anirudha Sahoo]
Thesis: PDF file [1.3 M].
Slides: PDF [835 K].
Long-range wireless for data and voice connectivity has been considered a viable and affordable
solution for rural India for a few years now. Numerous solutions have been presented to bridge the
digital divide; WiFiRe is one of them. The basic idea is to replace the CSMA/CA MAC of 802.11 with a
more efficient TDMA MAC. WiFiRe promises a higher data rate, longer range, and a more cost-effective
solution compared to WiFi-based solutions.
A preliminary implementation was started last year with the design phase, and we are extending
that work. This year, the WiFiRe MAC team at IIT Bombay has implemented most of the WiFiRe
MAC modules mentioned in the WiFiRe draft, and they can be integrated with an 802.11b PHY
easily. Our MAC implementation is able to support web and voice traffic for multiple subscriber
terminals and their clients. It also supports end-to-end connectivity for all clients, while they
remain unaware of the underlying layer-2 protocol.
We have come across numerous implementation issues and design decisions with respect
to the WiFiRe MAC testbed. While developing the WiFiRe MAC, we gained hands-on insight into
various system and networking issues. We have also suggested various possible extensions
to this work in the PHY integration and QoS areas. The initial results achieved by this project
are encouraging and should lead to full deployment of the system soon.
Implementation of WiFiRe MAC: Ranging, Registration and Classification modules:
M Ranjith Kumar
[co-guide: Prof. Anirudha Sahoo]
Thesis: PDF file [1.1 M].
Slides: PDF [600 K].
WiFiRe is an extension of the existing WiFi protocol for providing broadband access in rural
areas. It provides good bandwidth at low cost by making use of the licence-free spectrum in the 2.4 GHz
band and the cost-effective chipsets of WiFi (PHY). It mainly replaces the MAC of existing
WiFi (802.11b) with the MAC of WiMAX (802.16). It follows Time Division Duplex-Multi
Sector TDM (TDD-MSTDM), which uses the same channel for both UL (up-link) and DL (down-link).
It has a star topology and divides the whole area into sectors; each sector has one base station
(BS) with a sectorized antenna, each village has one subscriber terminal (ST) with a directional
antenna, and a System (S) located at the fiber Point-of-Presence (PoP) controls the whole
network. WiFiRe has one single MAC for all sectors (PHY) in the System, which helps
coordinate medium access.
Design of the WiFiRe MAC was started last year, and emulation over Ethernet was proposed.
We implemented most of the functionality for a single-sector WiFiRe MAC. The implementation
is done using C sockets, and it allows the user to test basic MAC functions, such as connection
establishment, packet flow, and header construction, in the absence of WiFiRe hardware. Our code
can be easily ported onto hardware. Our implementation supports various kinds of applications,
like HTTP, FTP, and VoIP, for a single BS, multiple Subscriber Terminals (ST), and their clients.
Challenges encountered in the implementation, and their solutions, are covered as well.
Implementation of WiFiRe PHY sectorization in OPNET:
Thesis: PDF file [1.0 M].
Slides: PDF [910 K].
WiFiRe (WiFi Rural Extension) is an extension of the existing WiFi protocol for providing long-range
broadband access to rural areas in a cost-effective way. As the 802.11 MAC is not suitable
for long-range communications, WiFiRe replaces the MAC of existing 802.11b with the MAC of
802.16 while retaining the 802.11b PHY layer to extend the range. WiFiRe uses a star topology and
divides the region into sectors, with each sector having one base station with a sectorized antenna
and one or more subscriber terminals with directional antennas. The WiFiRe system has a single MAC
for all sectors, which helps coordinate medium access using multi-sector TDM.
In this thesis, we discuss the WiFiRe model design in OPNET. The PHY part of WiFiRe
is implemented in OPNET using pipeline stages which describe the physical behaviour of
the wireless medium. The directional antennas in a six-sectored system are modelled using the
OPNET antenna pattern editor. Experiments are performed on the range of sectorization and the
number of VoIP calls supported by an ST in a six-sectored system to validate the model. The
results show that the model works as expected.
MH-WiFiRe: Multi-hop extension for a WiFiRe system:
[co-guide: Prof. Puru Kulkarni]
Thesis: PDF file [2.2 M].
Slides: PDF [870 K].
Cost effectiveness is an important criterion for any technology/system to
be successful in rural regions. WiFiRe (WiFi Rural Extension), based on
802.11, provides low-cost broadband Internet access for rural regions and
supports real-time traffic like VoIP and video. WiFiRe assumes a star
topology, with its cell covering a circular region 15-20 km in radius.
Due to its single-hop nature, it suffers from drawbacks like a line-of-sight
requirement and a fixed coverage area. In this project, we extend WiFiRe to
multiple hops and present a multi-hop WiFiRe architecture which alleviates
these drawbacks.
We first present different architectures for multi-hop WiFiRe and compare
them along dimensions important from the perspective of the
feasibility of the system in rural regions. We also present a detailed cost
and coverage analysis for some of these architectures. On the basis of
this comparative analysis, we select an architecture based on a tree
topology for further study. Our next contribution is a
detailed description of the MAC protocol for the multi-hop system. We
consider a TDMA-based MAC protocol influenced by 2P. We present the MAC-level
frame structure and give detailed procedures for the
tasks of the MAC protocol. To perform time-slot allocation, we propose
three different scheduling schemes. We then perform a comparative analysis
of these three scheduling schemes.
The next contribution of our project is a tool to evaluate the performance
of the multi-hop WiFiRe system. We compare the performance of our system for
different codecs. We present delay, coverage, and
capacity analyses of the system. Through this analysis we show that it is
not only feasible to have a multi-hop extension but also possible to
trade off coverage against the number of VoIP calls supported. The
trade-off depends on the type of codec and the scheduling scheme. The final
contribution of the project is the future work that can be done in this area.
Shikav extensions to support networking animation:
[co-guide: Prof. Abhiram Ranade]
Thesis: PDF file [900 K].
Slides: PDF [900 K].
There is growing interest in e-learning tools. One category of e-learning tool is the electronic lesson. In addition to classroom lectures, electronic lessons can be used to help students understand topics more completely through animations as interactive visual aids. For this purpose, many authoring tools have been developed; Shikav 2.0 is one of them.
Shikav is a framework for constructing interactive electronic lessons, including animations, using its own high-level scripting language. The framework lets us create lessons in Mathematics, Algorithms, Biology, Physics, and Economics. Shikav is developed in Java and supports constructs such as point, line, arc, circle, triangle, rectangle, anglecurve, edge, and vertex. However, these are not sufficient to easily create animations of network protocols; additional constructs need to be supported, such as node, packet, beacon, repeat loops, and if-else conditional statements.
We have added modules, and made modifications, to Shikav to support lessons explaining the basic concepts of network protocols.
In this thesis, a detailed description of the concepts, design, and features of Shikav is presented. The design and the newly added features are thoroughly described. We also explain a newly defined scripting language called "network script".
Design and implementation of MAC layer of WiFiRe protocol:
H Shravan Kumar
[co-guide: Prof. Anirudha Sahoo]
Thesis: PDF file [830 K].
Slides: PDF [300 K].
WiFiRe (Wireless Fidelity Rural Extension) is an extension of the existing WiFi
(802.11) protocol. The main aim of WiFiRe is to provide long-range communication with
high bandwidth, low cost, and easy availability of chipsets. It mainly replaces
the MAC mechanisms of existing WiFi (802.11b) so that it can be used for long-range
communication of about 15-20 km, in contrast to existing technology which can support
only up to a few hundred meters. It continues to use the existing physical layer of WiFi
(802.11b). The WiFiRe MAC has many features similar to WiMAX (802.16).
WiFiRe provides long-range communication by dividing the whole area into sectors,
each sector having one Base Station (BS) with a sectorized antenna. At each Subscriber
Terminal (ST), a directional antenna is used to connect the ST to the BS. WiFiRe uses
only one channel for both uplink and downlink, in which each sector is allocated slots
based on the Time Division Duplex, Multi-Sector TDM (TDD-MSTDM) mechanism.
In this report we have designed and implemented the WiFiRe protocol and emulated
it over a LAN. Problems associated with the design and implementation, and their plausible
solutions, are covered as part of this report. Additionally, it comprises sequence
diagrams, flow diagrams, and state diagrams of the working components of WiFiRe. We have
emulated the protocol using C sockets, and this report covers the details of the same. It
also describes how slots are scheduled to STs by the BS.
Design and implementation of WiFiRe MAC layer protocol:
[Guide: Prof. Anirudha Sahoo]
Thesis: PDF file [895 K].
Slides: PPT [235 K].
Wireless Fidelity for Rural Extension (WiFiRe) is a MAC-layer protocol
designed to provide broadband Internet facilities in rural areas of India. It supports long-range
communication using an inherited 802.16d MAC and the 802.11b PHY layer. WiFiRe
uses low-cost chipsets capable of communicating on unlicensed spectrum to transmit data.
The report covers the background of the WiFiRe protocol, the
basic working of the protocol, and its implementation in C. Problems associated with the
design and implementation, and their plausible solutions, are covered as part of the report.
Additionally, it comprises sequence diagrams, flow diagrams, and state diagrams
of the working components of WiFiRe, along with the design model in C sockets, and describes
the issues and challenges involved in implementing the project.
Formal specification and verification of WiFiRe protocol:
Ch Sudheer Keshav
[co-guide: Prof. Krishna S]
Thesis: PDF file [750 K].
WiFiRe stands for WiFi-Rural Extension. There has been a lot of growth in
the Internet as well as cellular telephony, but it is mostly restricted to cities in developing
countries like India. A number of alternative solutions have been proposed which in
theory are suitable for rural areas, but because of their high cost, WiFiRe is the best
option. WiFiRe combines the advantages of both 802.11 and 802.16: the cost
affordability of 802.11 and the QoS capabilities of 802.16 are merged to obtain WiFiRe. The
WiFiRe MAC is time-division duplex (TDD) over a single 802.11b channel along with a
multi-sector TDM mechanism. WiFiRe provides long-range communication
by dividing the whole area into sectors, each sector having one base station with a
directional antenna so that it covers more distance.
Formal methods may be used to give a description of the system to be developed, at
whatever level(s) of detail desired. This formal description can be used to guide further
development activities; additionally, it can be used to verify that the requirements for the
system being developed have been completely and accurately specified. Formal methods
are mathematically based techniques for the specification, development, and verification
of software and hardware systems. Formal methods and testing complement each other:
testing and formal verification are both necessary, and a formally verified specification is a
good starting point for testing. So a formal specification and verification of WiFiRe is
presented in this thesis.
Performance evaluation of WiFiRe using OPNET:
Venkat Reddy M
[co-guide: Prof. Varsha Apte]
Thesis: PDF file [760 K].
Slides: PDF [700 K].
The goal of WiFiRe is to provide broadband Internet services in rural areas. The key idea in WiFiRe is to replace the 802.11b MAC mechanisms with something more suitable for long-range communication, while using the 802.11b PHY. The WiFiRe network topology is a star: a System (S) with a set of Base Stations (BS) with sectorized antennas at the fiber Point of Presence (PoP), and Subscriber Terminals (ST) in the surrounding villages with directional antennas. The WiFiRe cell is divided into sectors, and each sector is covered by a BS. WiFiRe has one common medium access (MAC) controller for all the sectors, to coordinate medium access among them. There are multiple STs in each sector. The multiple access mechanism is time-division duplexed, multi-sector TDM (TDD-MSTDM) scheduling of slots.
This protocol is modeled in OPNET. In this model we have implemented a round-robin slot scheduling algorithm and a Connection Admission Control mechanism. We have simulated the protocol in different scenarios with different MAC configurations, covering the UGS, rtPS, nrtPS, and BE service classes with different settings, such as varying the polling time and the slot length. We have compared the throughput and service delays of the different service classes in each scenario. From the simulation results, we determined the best slot length and the maximum number of users supported by each BS for the different service classes.
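A round-robin slot scheduler of the kind mentioned above can be sketched as follows. This is an illustrative simplification, not the OPNET model itself; the ST names, demand counts, and frame size are invented:

```python
def round_robin_schedule(requests, slots_per_frame):
    """Allocate downlink slots one at a time, cycling over the subscriber
    terminals that still have pending demand.
    requests: mapping of ST name -> number of slots it wants this frame."""
    pending = dict(requests)
    schedule = []
    while len(schedule) < slots_per_frame and any(v > 0 for v in pending.values()):
        for st in list(pending):
            if len(schedule) >= slots_per_frame:
                break
            if pending[st] > 0:
                schedule.append(st)   # one slot granted per pass, per ST
                pending[st] -= 1
    return schedule

print(round_robin_schedule({"ST1": 2, "ST2": 1}, 4))  # → ['ST1', 'ST2', 'ST1']
```

The cycling pass is what gives round robin its fairness: no ST receives a second slot in a frame until every ST with pending demand has received one.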
Design of PSTN-VoIP gateway with inbuilt PBX & SIP extensions for wireless medium:
Thesis: PDF file [770 K].
Slides: PDF [630 K].
A VoIP gateway enables voice communication between users of the IP network and the
Public Switched Telephone Network (PSTN). The system setup requires a PC installed
with Asterisk, and a gateway to integrate with the PSTN. The problems with this solution are
the high cost, the power consumption, and the involved setup of the system.
We have designed a single-box solution for the PSTN-VoIP integration system. We
studied the detailed architecture of the gateway, the protocols used in VoIP call setup
and communication, the software used for PBX systems, and the internal parts of
the SPA3000 gateway. We used a Via motherboard, flash memory, and a normal data
modem to create an improved and cost-effective system.
Next, we performed various studies of Asterisk's response time over wired and
wireless media. We found a remarkable difference in Asterisk's response time between
the two media. The reason for the high response time of Asterisk over a wireless medium is
slow call setup. SIP, being a text-based protocol, is engineered for high-data-rate links,
so SIP message sizes have not been optimized. With low-bit-rate IP connectivity
on the signaling channel, the transmission delays for call setup and feature invocation are
high. We have extended the Session Initiation Protocol to improve its efficiency over
wireless media. We have implemented the compression and decompression mechanisms
according to the SigComp standard and integrated them into the Asterisk server. We have
also developed a SigComp-enabled client to run with Asterisk. We performed extensive
testing of the system and obtained up to 90% compression using SigComp with Deflate.
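The Deflate side of this can be illustrated with Python's zlib (an assumption for illustration only; the thesis integrates SigComp into Asterisk in C, and the sample SIP message below is invented). Repetitive SIP header text is exactly the kind of input Deflate compresses well:

```python
import zlib

def compression_ratio(message: bytes, level: int = 9) -> float:
    """Fraction of bytes saved by Deflate compression on a text message."""
    return 1 - len(zlib.compress(message, level)) / len(message)

# hypothetical SIP INVITE fragment; repeated headers mimic redundant signaling
invite = b"INVITE sip:bob@example.com SIP/2.0\r\nVia: SIP/2.0/UDP gw.example.com\r\n" * 4
print(round(compression_ratio(invite), 2))
```

SigComp goes further than plain Deflate by letting endpoints share a dictionary of previous messages, which is how ratios near the reported 90% become reachable on short signaling messages.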
Design of PSTN-VoIP gateway for rural environments:
K Sravana Kumar
Thesis: PDF file [1.2 M].
Slides: PDF [1.3 M].
A Voice over IP (VoIP) gateway is one of the various components used in realizing the
convergence of data and telephone networks. It enables voice communication between
callers on the IP network and the Public Switched Telephone Network (PSTN). There
are many vendors that provide VoIP-PSTN gateways, although most of them are very
expensive. A soft PBX is a software-based PBX solution using VoIP technology; Asterisk
is an example. If we want to maintain a VoIP-based telephone exchange,
the cost of the entire system proves very high, making it unsuitable for rural
deployment. Also, most of the devices require AC power, which is not easily available in
rural areas. There is currently no device on the market which provides the functionalities
of both a gateway and a PBX and yet operates on DC power. Our aim is to design such a
box, which performs as a VoIP gateway with an inbuilt Asterisk PBX while
consuming less power and remaining relatively inexpensive.
Asterisk has its own authentication for SIP users. Generally, large organizations and
universities maintain some external authentication for their employees/students, such as
LDAP. Many of these organizations provide VoIP telephony facilities to their members. So,
for unique authentication of users, there is a need to provide an external authentication
mechanism in Asterisk which can interface with the authentication system in use at the
organization. Hence, one more aim of this project is to integrate Asterisk with LDAP.
HSM: A hybrid streaming mechanism for delay-tolerant multimedia applications:
Thesis: PDF file [928 K].
Slides: PPT [1.5 M].
Streaming is a technique for transferring data such that it can be processed as a steady and continuous stream. In a content dissemination network, a central server at the source typically streams the content in response to client requests. We term this the Pure Streaming Mechanism (PSM). Considering that in a dissemination network controlled by a Content Service Provider (CSP) the backbone links are highly provisioned, using a streaming server at the source leads to underutilization of these links. The links are also occupied for the duration of playout of the multimedia content, because by the streaming property the streaming server sends only the amount of data equivalent to the encoded streaming rate to the client, irrespective of the available bandwidth on the path. Delay-tolerant applications are a special class of on-demand multimedia applications where client i requests the start of playback at a time (t + dti), where t is the current time and dti is the delay tolerance acceptable to client i.
In this thesis report, we present a novel Hybrid Streaming Mechanism (HSM) for delay-tolerant multimedia applications to enhance the following performance parameters: (i) the number of serviced clients, and (ii) the delivered stream rate at clients. HSM allows streaming from strategically chosen intermediate nodes to which the content is dynamically transferred from the source using FTP (File Transfer Protocol). As FTP uses the entire available bandwidth to transfer the data, it frees up the high-bandwidth links faster for serving requests from other clients sharing these links, increasing the efficiency of the service. In HSM, transferred content is temporarily cached at the intermediate node (the Streaming Point). Such temporary caching further enhances the performance of HSM, as requests for the same content are serviced from the cache.
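The core HSM trade-off above can be sketched as a small decision function: serve from the streaming point's cache if possible, otherwise check whether an FTP push of the whole file to the streaming point finishes within the client's delay tolerance dt_i. This is an illustrative model only; the function name, parameters, and numbers are hypothetical, and the real mechanism also accounts for link sharing and streaming-point selection.

```python
def choose_mechanism(file_mb, ftp_mbps, delay_tolerance_s, cached=False):
    """Decide how a request can be served under the HSM idea.

    A request is HSM-serviceable if the content is already cached at the
    streaming point, or if an FTP transfer of the whole file from the
    source can finish within the client's delay tolerance dt_i.
    (Illustrative sketch; all names and figures are hypothetical.)
    """
    if cached:
        return "stream-from-cache"
    transfer_time = file_mb * 8 / ftp_mbps   # seconds to push the file over the backbone
    return "hsm-stream" if transfer_time <= delay_tolerance_s else "pure-stream"

# A 90 MB clip over a 100 Mbps backbone takes 7.2 s, so a client that
# tolerates 10 s of startup delay can be served via the streaming point.
print(choose_mechanism(90, 100, 10))               # -> hsm-stream
print(choose_mechanism(90, 100, 5))                # -> pure-stream
print(choose_mechanism(90, 100, 5, cached=True))   # -> stream-from-cache
```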
Video transmission over varying bandwidth links:
Thesis: PDF file [480 K].
Slides: PPT [415 K].
The Internet today is a best-effort network with time-varying bandwidth characteristics.
A system for multicasting video over the Internet has to deal with heterogeneity of
receiver capabilities and requirements, so adaptive mechanisms are necessary.
Conventionally, multi-layered transmission of data is preferred to solve the problem of
varying bandwidth in multimedia multicast applications. In today's scenario, the majority
of flows are highly bandwidth-consuming and bursty in nature. Consequently, we observe
sudden changes in available bandwidth, leading to large variations in received video quality.
Here our aim is to minimize these variations in video quality. In this thesis we propose
the traffic pattern based adaptive multimedia multicast (TPAMM) scheme. In this scheme,
link bandwidth values are available for a prediction window length. The bandwidth
prediction model is refined periodically using feedback from receivers. The TPAMM scheme
also maximizes the minimum quality of the video playout.
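The layer-selection idea behind maximizing the minimum playout quality can be sketched as follows: given bandwidth predictions over a window, subscribe to as many cumulative video layers as fit under the minimum predicted bandwidth, so quality stays constant across the window. The rates and window are hypothetical; the actual TPAMM prediction model is richer.

```python
def select_layers(predicted_bw, layer_rates):
    """Pick the number of cumulative video layers whose total rate fits
    under the minimum bandwidth predicted over the window, so the
    playout quality never drops within the window.
    (Illustrative sketch of the TPAMM idea; figures are hypothetical.)"""
    floor = min(predicted_bw)
    total, layers = 0.0, 0
    for rate in layer_rates:      # base layer first, then enhancement layers
        if total + rate > floor:
            break
        total += rate
        layers += 1
    return layers

# Predicted bandwidth (kbps) over a 5-slot window; five layers of 128 kbps.
print(select_layers([700, 520, 640, 560, 600], [128, 128, 128, 128, 128]))  # -> 4
```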
Application partitioning - A dynamic, runtime, object-level approach:
Thesis: PDF file [1.2 M].
With the increase in the computational complexity of programs, it becomes attractive to
distribute a centralized program written for a standalone system onto a network of nodes.
Application partitioning is one such technique: it divides (allocates) application
components among different hosts in the network, thereby getting a standalone application
executed in a distributed setting.
Most of the existing work in application partitioning uses classes as the basic component
for partitioning. However, the behavior of an application is determined by the interaction
of its entities at run-time, and in an object-oriented program the run-time entities are
principally objects. We believe that a more effective and relevant partitioning mechanism
can be created by considering objects as the basic components for distribution. We also
believe that a truly flexible partitioning system should, as far as possible, postpone the
decision regarding the placement of each component to run-time. Any such decision should
try to optimize both the number of remote invocations that a distribution strategy would
result in, and the load distribution on the available nodes.
We thus propose a working architecture that exploits the dependency relationships between
the components to build a model prior to program execution. At run-time, the model
progressively incorporates information about already-allocated components and helps in
deciding the placement of each new component.
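A run-time placement decision of this kind can be sketched as a greedy score over candidate hosts, trading off the remote invocations a placement would cause against the current load on each host. The function, its inputs, and the weighting factor are hypothetical illustrations of the idea, not the thesis's actual model.

```python
def place_object(interactions, placement, host_load, alpha=1.0):
    """Choose a host for a newly created object.

    interactions: {other_obj: estimated invocations with the new object}
    placement:    {other_obj: host it already lives on}
    host_load:    {host: current load}
    Score each host by the remote invocations it would cause plus a
    load-balancing penalty, and pick the minimum.
    (Illustrative sketch; alpha and the cost model are hypothetical.)"""
    def cost(host):
        remote = sum(n for obj, n in interactions.items()
                     if placement.get(obj) != host)
        return remote + alpha * host_load[host]
    return min(host_load, key=cost)

# Object "c" talks mostly to "a" (on h1), so co-locating it on h1 avoids
# ten remote invocations at the price of h1's slightly higher load.
placement = {"a": "h1", "b": "h2"}
interactions = {"a": 10, "b": 2}
print(place_object(interactions, placement, {"h1": 0.5, "h2": 0.1}))  # -> h1
```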
Proxy-AODV: Extension of AODV for partially connected Adhoc networks:
Thesis: PDF file [520 K].
Slides: PPT [500 K].
Ad hoc On-demand Distance Vector (AODV) is a routing scheme for delivering messages in a
connected Mobile Ad hoc Network (MANET). In MANETs, a set of nodes is used to route data
on behalf of other nodes. This scheme relies on the assumption that nodes are distributed
over the entire region and that there exists connectivity between any source-destination
pair in the network at all times. The scheme fails when the network is partially
connected, i.e., when there is no single-hop or multi-hop path from source to destination.
The existing schemes for routing in partially connected ad hoc networks make assumptions
such as: the source and destination never have a connected path; a set of mobile nodes
with fixed routes delivers the data; nodes have large storage space; and so on. All these
assumptions may not hold in a resource-constrained ad hoc network. We propose an extension
to AODV, named proxy-AODV, to remove the above assumptions. In situations where the
network is connected, our protocol behaves like normal AODV. When there is no connected
path, we exploit the mobility of nodes and use a "store and forward" approach to deliver
the data.
In proxy-AODV, when the source and destination are not connected, some nodes, called
proxy nodes, are selected by the source to hold the data on behalf of the destination.
Proxy nodes act as sources and try to deliver the data to the destination.
Using extensive simulations in the QualNet simulator, we show that our protocol provides
a good message delivery ratio while keeping the buffer occupancy at nodes in check.
Implementation of WiFiRe model in OPNET:
Thesis: PDF file [630 K].
The 802.11 (WiFi) family of wireless communication standards is extremely popular for
indoor wireless networking. The chipsets are designed so that the PHY and MAC layers are
separate, and they are available off-the-shelf at low cost. 802.11 also operates in the
2.4 GHz ISM band, which is license-free spectrum, avoiding licensing costs; hence it is
an attractive option for long-range communication in rural areas of developing countries.
However, many problems occur when using 802.11 for outdoor long-range (15-20 km)
communication. The DCF mode does not support any quality of service, while PCF is not
suitable for a large number of stations and long distances. The MAC is based on CSMA/CA
and is not efficient when the number of stations increases. Hence it is necessary to
redesign the MAC while retaining the PHY. One approach to this is WiFiRe.
WiFiRe replaces the MAC of 802.11 with a MAC similar to that of 802.16, which is suitable
for long-range communication. It also uses directional antennas and is meant for a star
topology. The WiFiRe MAC at the Base Station is a multi-sector MAC which controls more
than one 802.11 directional PHY.
The goal of this project is to implement the WiFiRe model in OPNET and analyse its
performance. The code is developed such that it can be easily extended to a real
implementation.
RFIDplanner: A coverage planning tool for RFID networks:
Thesis: PS file [16 M]. PDF file [1.6 M].
Slides: PPT [11 M].
Radio Frequency Identification (RFID) systems are emerging as economical solutions for
fast identification of objects. One of the important aspects of setting up an RFID network
is deploying RFID readers to ensure complete coverage of RFID tags in a given area. This
task is usually accomplished by conducting site surveys and then deploying RFID readers in
different configurations to determine the optimal one. As the size of the given area
increases, the time taken to set up an RFID network increases, thus increasing the cost of
deployment.
We present RFIDPlanner, a coverage planning tool for RFID networks, which attempts to tackle the issue using
simulation. Given the details of an area, the properties of obstacles present in the area, and the
specifications of the RFID reader and tags used, RFIDPlanner simulates the coverage pattern of a reader giving
a graphical view of the RFID reader's interrogation range. Through its interactive GUI the layout can be
modified during run-time, by moving and deleting existing RFID readers and adding new ones. The zoom feature
allows different levels of visualizations ranging from a view of the entire layout to focused views of
particular sections and RFID readers.
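The coverage computation at the heart of such a planning tool can be sketched by sampling the layout on a grid and testing each point against the readers' interrogation ranges. This sketch assumes free-space circular ranges and omits the obstacle attenuation that RFIDPlanner models; all names and figures are illustrative.

```python
import math

def coverage(readers, rng, width, height, step=1.0):
    """Fraction of grid points inside some reader's interrogation range.
    readers: list of (x, y) positions; rng: interrogation radius.
    (Illustrative sketch: circular free-space ranges only, no obstacles.)"""
    covered = total = 0
    y = 0.0
    while y <= height:
        x = 0.0
        while x <= width:
            total += 1
            if any(math.hypot(x - rx, y - ry) <= rng for rx, ry in readers):
                covered += 1
            x += step
        y += step
    return covered / total

# One reader of range 5 m in the middle of a 10 m x 10 m room covers most
# of the grid, but misses the corners.
print(round(coverage([(5, 5)], 5, 10, 10), 2))   # -> 0.67
```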
Improving RFID systems to read tags efficiently:
[Guide: Prof. Anirudha Sahoo]
Thesis: PDF file [6.3 M].
Radio Frequency IDentification (RFID) is slated to become a standard for tagging various products.
As more and more products become RFID enabled, fast tag identification
mechanisms will become important. Various tag identification (or anti-collision) algorithms have been proposed for RFID systems.
This work focuses on methods to improve tag read efficiency in RFID systems.
We propose an Intelligent Query Tree (IQT) Protocol for tag identification that exploits specific prefix
patterns in the tags and makes the identification process more efficient. IQT is a memoryless protocol that identifies
RFID tags more efficiently in scenarios where tag IDs have some common prefix (e.g.,
common vendor ID or product ID). IQT is most suitable for readers deployed in exclusive
showrooms and at the shipment points of big malls, where products may come from the same
manufacturer and be of the same product type. We provide a worst-case complexity analysis
of IQT and show the performance improvement of this protocol over the traditional Query
Tree protocol in different scenarios. For other cases we show the improvement using
simulation results.
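The baseline Query Tree protocol that IQT improves upon can be sketched in a few lines: the reader broadcasts a bit-string prefix, all tags whose ID starts with it reply, and a collision splits the prefix into '0' and '1' branches. IQT's contribution is to jump over prefixes known to be shared (such as a common vendor ID) instead of descending bit by bit; the sketch below shows only the baseline, with hypothetical 4-bit IDs.

```python
def query_tree(tags):
    """Baseline binary Query Tree identification over fixed-length tag IDs.
    Returns (identified tags, number of reader queries issued).
    (Sketch of the classical protocol, not of IQT itself.)"""
    identified, queries, stack = [], 0, [""]
    while stack:
        prefix = stack.pop()
        queries += 1
        responders = [t for t in tags if t.startswith(prefix)]
        if len(responders) == 1:          # exactly one reply: tag identified
            identified.append(responders[0])
        elif len(responders) > 1:         # collision: split the prefix
            stack.append(prefix + "0")
            stack.append(prefix + "1")
    return sorted(identified), queries

ids, n_queries = query_tree(["0101", "0110", "1100"])
print(ids, n_queries)   # -> ['0101', '0110', '1100'] 7
```

Note how the two tags sharing the "01" prefix cost extra queries to separate; a protocol that knows the common prefix in advance can skip straight to the differing bits.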
PLUS-DAC: An admission control scheme for IEEE 802.11 Wireless LANs:
Kiran Kumar Gavini
[Guide: Prof. Varsha Apte]
Thesis: PDF file [460 K].
Slides: PDF [640 K].
With the increasing demand for support of real-time applications, there is a compelling
need for Quality of Service (QoS) in present-day wireless LANs. The IEEE 802.11 Medium
Access Control (MAC) has become a de facto standard for wireless LANs, but there are many
inherent QoS limitations in the base standard, as it was developed primarily for
best-effort data services.
The 802.11 Task Group E (TGe) is about to ratify a QoS extension to the base 802.11
standard, namely 802.11e. The 802.11e standard provides many mechanisms for QoS support
at the MAC layer. However, even the service differentiation provided in 802.11e is not
enough to meet the QoS requirements of time-bounded multimedia traffic at high load.
These requirements can be better satisfied if we employ admission control and bandwidth
reservation mechanisms.
Another important concern in WLANs is channel utilization. Generally, partitioning-based
reservation schemes do a static division of bandwidth. When bandwidth is divided
statically, more bandwidth can often get allocated to a category which is currently not
offering much traffic to the network, resulting in under-utilization of the bandwidth
resources. Moreover, bandwidth partitioning should not be based purely on the priority of
the traffic.
We propose a measurement based distributed admission control mechanism,
for the 802.11e Wireless Local Area Network (WLAN) functioning in infrastructure mode. We call
the scheme PLUS-DAC (Priority, Load and Utilization based Scheme for Distributed
Admission Control). PLUS-DAC measures the load and utilization in the BSS and adapts the
Transmission Opportunity (TXOP) reservation dynamically. Our results show that PLUS-DAC
can achieve quasi-optimal utilization and continue to satisfy the QoS guarantees given to
admitted flows.
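A measurement-based admission test of this flavor can be sketched as a single comparison: admit a new stream only if the channel time it requests per beacon interval, added to the measured busy time, stays under a utilization target. The function, parameter names, and the 90% threshold are hypothetical; PLUS-DAC's actual scheme also factors in priority and load.

```python
def admit(requested_txop_us, measured_busy_us, beacon_interval_us=100_000,
          target_utilization=0.9):
    """Simplified measurement-based admission test: admit a new traffic
    stream only if its requested channel time per beacon interval, plus
    the measured busy time, stays within a utilization budget.
    (Illustrative sketch; the threshold and figures are hypothetical.)"""
    budget = target_utilization * beacon_interval_us
    return measured_busy_us + requested_txop_us <= budget

print(admit(5_000, 80_000))    # -> True  (85 ms fits a 90 ms budget)
print(admit(15_000, 80_000))   # -> False (95 ms exceeds the budget)
```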
An Efficient QoS Scheduling Architecture for IEEE 802.16 Wireless MANs:
[co-guide: Prof. Krishna Paul]
Thesis: PDF file [590 K]. PS file [2128 K].
Slides: PPT [1116 K].
The IEEE 802.16 WirelessMAN standard specifies the air interface of a fixed
point-to-multipoint Broadband Wireless Access system. The IEEE 802.16 MAC provides
extensive bandwidth allocation and QoS mechanisms for various types of applications.
However, the details of the packet scheduling mechanisms for both the downlink and uplink
directions are left unspecified in the standard.
We propose an efficient QoS scheduling architecture for IEEE 802.16 WirelessMANs. Our main
design goals are to provide delay and bandwidth guarantees
for various kinds of applications and to maintain fairness
among various flows while still achieving high bandwidth utilization.
Our architecture supports the diverse QoS requirements of all four kinds of service flows
specified in the IEEE 802.16 standard. We have implemented the IEEE 802.16 MAC, integrated
with our architecture, in the QualNet 3.6 network simulator. We also present the
simulation analysis of our architecture. We have shown, through
simulation, that our architecture is capable of achieving high
bandwidth utilization. Simulation results also show that sufficient
bandwidth is allocated to high priority flows so that their QoS
guarantees are always met.
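The priority skeleton behind serving 802.16's four service classes can be sketched as a strict-priority grant round: higher classes are served first, and best-effort traffic gets whatever capacity remains. Real schedulers of this kind add per-class policy (fixed periodic grants for UGS, deadline-aware grants for rtPS); the function and figures below are illustrative only.

```python
PRIORITY = ["UGS", "rtPS", "nrtPS", "BE"]   # 802.16 service classes, highest first

def allocate_uplink(requests, capacity):
    """One simplified grant round: serve bandwidth requests in strict class
    priority, granting each flow up to the remaining capacity.
    requests: list of (flow_id, class, demand).
    (Illustrative sketch; not the thesis's actual scheduling architecture.)"""
    grants, remaining = {}, capacity
    for cls in PRIORITY:
        for flow, c, demand in requests:
            if c != cls or remaining == 0:
                continue
            granted = min(demand, remaining)
            grants[flow] = granted
            remaining -= granted
    return grants

# UGS and rtPS flows are fully served; the best-effort flow absorbs the rest.
reqs = [("f1", "BE", 40), ("f2", "UGS", 30), ("f3", "rtPS", 50)]
print(allocate_uplink(reqs, 100))   # -> {'f2': 30, 'f3': 50, 'f1': 20}
```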
Mitigating the Reader Collision Problem in RFID Networks:
Thesis: PDF file [1204 K].
Slides: PPT [1756 K].
Radio Frequency Identification (RFID) is a means to identify and track objects using radio frequency transmission.
An RFID system consists of readers and tags. Readers use radio signals to communicate with the tags.
Tags may be active (battery powered) or passive (powered by the reader's signals).
RFID is increasingly being used in many applications such as inventory management, object
tracking, and retail checkout. The reader collision problem occurs when the signal from
one reader interferes with the signals from other readers. Such interference can result in
a lack of communication between the readers and some of the tags in the vicinity, leading
to incorrect and inefficient operation of an RFID system. This problem is further
aggravated when mobile/hand-held readers are used in the system. Hence efforts are
required to minimize this interference.
We describe Pulse, a distributed protocol to reduce reader collisions in RFID systems.
The operation of the Pulse protocol is based on periodic beaconing on a separate control
channel by a reader while it is reading tags. The protocol functions effectively not only
with fixed RFID readers but also with mobile RFID readers. We show, using simulation in
QualNet, that with the Pulse protocol the throughput (overall read rate) increases by 98%
(with 49 readers) as compared to "Listen Before Talk" (CSMA), and by 337% as compared to
Colorwave (with 9 readers). We also present an analytical model for our protocol in a
single-hop scenario.
RFIDcover: A coverage planning tool for RFID networks with mobile readers:
Thesis: PDF file [1444 K].
Slides: PPT [1227 K].
Demo: ZIP [80 K].
Radio Frequency Identification (RFID) finds use in numerous applications
involving item identification and tracking. In a typical application, RFID
tags are attached to the items and are periodically queried by readers.
These applications require that a sufficient number of readers be deployed
in order to provide complete coverage of all the tags in the given area.
Using a fixed placement of readers to guarantee complete coverage at all times increases
deployment costs. Also, most practical applications do not need complete coverage at all
times; it is enough to provide complete coverage periodically, say with each tag being
covered every T seconds. For such applications, using mobile readers to cover the area
would be more cost-effective.
Given an area to be covered completely within a period T, determining the
number of mobile readers required, their placement and movement pattern,
is a difficult problem. We have designed and developed RFIDcover, an
automated coverage planning tool, that addresses this problem. Given an
application scenario and reader specifications, RFIDcover determines an
optimal number of readers required to guarantee complete coverage within
the specified period T. It also generates a layout giving the placement
and movement pattern of the readers. In this thesis report, we present
RFIDcover and its implementation for a retail inventory tracking
application scenario and evaluate its effectiveness.
M2MC: Middleware for Many to Many Communications over Broadcast Networks:
Chaitanya Krishna B.
Thesis: PDF file [2272 K].
Slides: PPT [544 K].
M2MC is a new distributed computing middleware designed to support collaborative applications running on devices
connected by broadcast networks. Examples of such networks are wireless ad hoc networks of mobile computing devices,
or wired devices connected by a local area network. M2MC is useful for building a broad range of
multi-user applications like multiplayer games, conversations, and groupware systems.
The M2MC architecture consists of a message ordering protocol, a member synchronization
protocol, and protocols for processes to join and leave groups. We emphasize the message
ordering protocol, as it is a key component in developing group communication
applications. We propose a new message ordering, called S_b ordering, that orders messages
based on their semantic relationships as specified by the users.
Some salient features of M2MC: unlike existing middleware architectures that rely on
central servers, M2MC is a truly distributed protocol, so applications developed using
M2MC do not require central servers. Being broadcast-oriented, M2MC does not require any
resource-consuming routing protocols. Distributed application development is simplified
by the M2MC APIs.
QoS Guarantees for Real-Time Applications in 802.11 WLANs:
Thesis: PDF file [815 K].
Slides: PPT [415 K].
The IEEE 802.11 standard specifies the MAC and PHY layers of the OSI seven-layer model
for Wireless LANs. 802.11 is by far the most widely used and popular of the suite of WLAN
standards. Channel access can be either distributed or centralized, called DCF and PCF
respectively.
The expansion of IEEE 802.11 based WLANs has created interest in providing
Quality of Service (QoS) guarantees in such networks. 802.11e suggests a
flow-based solution based on priority queues at the node, for providing QoS
guarantees. We propose a different approach, using a variant of TDMA, called
Dynamic Time-division Multiple Access (DTMA).
It is based on the observation that the ratio of data transmission time to
control packet (poll or acknowledgement) transmission time is typically 6:1, and drops to
almost 2:1 when the data packet size
becomes smaller than 500 bytes. DTMA focuses on reducing control information and thereby
increasing the time available for data transmission; more time available for data
transmission implies increased throughput.
We remove the overhead of control packets in the HCF by using cumulative acknowledgements
and piggybacking. Taking advantage of 802.11's inherently limited range, DTMA, the
modified TDMA, is used for data transfer in HCF without encountering the problem of
distributed time synchronization. In this dissertation, we analyse 802.11e and DTMA using
probability models, with simulations to support these models. Aided by simulations and
analytical methods, we show that DTMA has tighter delay bounds than 802.11e for real-time
and QoS-sensitive applications. We also show that DTMA enhances the overall throughput by
almost 20%, implying better QoS guarantees in the 802.11 domain.
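The airtime arithmetic behind the 6:1 observation is easy to make concrete: with a data-to-control ratio of 6:1, roughly 6/7 of the channel time carries data, and shrinking the control share raises that fraction. The halved control time below is a hypothetical figure purely for illustration, not a result from the thesis.

```python
def data_airtime_fraction(data_units, control_units):
    """Fraction of channel time spent on data when each data transmission
    costs data_units of time and its control traffic (poll + ACK) costs
    control_units."""
    return data_units / (data_units + control_units)

# Per the abstract, data:control is about 6:1 for large packets, so roughly
# 6/7 of the airtime carries data.
baseline = data_airtime_fraction(6, 1)
# If cumulative ACKs and piggybacking were to, say, halve the control time
# (a hypothetical figure), the data fraction rises accordingly:
improved = data_airtime_fraction(6, 0.5)
print(round(baseline, 3), round(improved, 3))        # -> 0.857 0.923
print(f"throughput gain ~ {improved / baseline - 1:.1%}")   # -> ~ 7.7%
```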
Lookup service for peer-to-peer systems in mobile adhoc networks:
[co-guide: Prof. Krishna Paul]
Thesis: PDF file [448 K]. PS file [572 K].
Slides: PPT [477 K].
A Peer-to-Peer (P2P) system consists of a set of nodes in which every node shares its own
resources and services with other nodes in the system. Any node can make a query for data
currently available in the network. These peer-to-peer systems are basically overlay
networks that work on top of a fixed network infrastructure. With the increasing
popularity of Mobile Ad hoc Networks (MANETs), it becomes necessary to think of P2P
systems which can work efficiently in mobile environments.
Current P2P systems do not fit well into mobile environments, where each node has to
follow multi-hop routing paths and faces unpredictable mobility. If current P2P systems
are deployed on a MANET, the application layer and network layer both perform different
routing mechanisms independent of each other, resulting in multi-layer routing redundancy.
Also, routes at the application layer may not necessarily be optimal at the network layer.
Thus, we bring all routing and query lookup mechanisms down to the network layer. Further,
current ad hoc network protocols do not fulfil P2P requirements, as they are basically
flooding-based protocols. Here, we address this problem and propose a novel,
controlled-flooding-based approach which works at the network layer and helps P2P systems
work efficiently in mobile environments.
RINGS is a protocol which implements this approach. RINGS performs query lookup at the
network layer rather than at the application layer, thus eliminating application-layer
routing overhead. It also reduces query lookup cost by evenly distributing data indices
throughout the network, thus reducing network-layer routing overhead.
Performance Comparison of DAMA MAC Schemes over Satellite Networks:
Kalyan Rao D.
Thesis: PDF file [943 K].
Slides: PDF [141 K].
Satellite networks provide wide coverage of a geographic area and high bandwidth.
Satellite capacity is often the limiting resource and must be used efficiently. Internet
traffic is highly bursty in nature, and Demand Assignment Multiple Access (DAMA)
techniques are suitable to provide bandwidth matching instantaneous requirements; they
give significant improvements in the delay/utilization performance of Geo-stationary
Earth Orbit (GEO) satellite channels supporting a finite number of users with bursty data
traffic. So the performance of various DAMA MAC schemes is important.
In this thesis, we investigate the performance of the CFDAMA and BTDAMA protocols on
satellite networks and propose an extension to BTDAMA, User Prioritized BTDAMA
(BTDAMA-UP), which implements Quality of Service (QoS) by user prioritization so that
different groups of users get different levels of service.
MSIP: A protocol for efficient handoffs of real-time multimedia in mobile wireless scenarios:
Ranjeeth Kumar A.
Thesis: PDF file [367 K]. PS file [2296 K].
The support for IP mobility has become very important with the increasing growth of mobile applications.
One of the key challenges for the deployment of such wireless Internet infrastructure is to efficiently
manage user mobility. Mobility is handled by Mobile IP at the network layer and Session Initiation
Protocol (SIP) at the application layer.
The main aim of these two schemes is to provide seamless connectivity for ongoing
communications, that is, to keep the handoff latency as low as possible. The delay
constraint becomes even more crucial for real-time applications. The Mobile IP scheme has
drawbacks such as triangular routing, longer delays, and the need for tunneling
management. SIP solves many of the problems of Mobile IP, but it involves the
time-consuming process of obtaining a new IP address from the DHCP server.
In this report, we propose a hybrid scheme, MSIP, that integrates Mobile IP and SIP. MSIP
reduces the handoff delay for real-time applications. It avoids the need for tunneling
packets throughout the handoff period, and communication need not be suspended until a
new IP address is allocated by the DHCP server. Thus MSIP gives better performance than
Mobile IP and SIP for real-time applications.
Design and deployment of file transfer protocol over asymmetric satellite networks:
Thesis: PDF file [1479 K].
Slides: PPT [1.5 M].
Satellite networks are the most widely deployed networks for audio and video
transmission. Though the initial installation cost is high, the benefit of reaching users
spread over very large areas makes them a viable and very effective option for sending
information in which some loss is tolerable. Some of the main drawbacks of satellite
networks are limited bandwidth availability, the large delay involved in transmission,
and their asymmetric nature. This leads to still higher delays in correcting losses
incurred during transmission, making them difficult to use for reliable transmission.
But when the need for sending data reliably to multiple users spread across a wide region arises,
the long reach and inherent multicast nature of satellite networks make them an interesting option.
Our work explores the possibility of using asymmetric networks like satellite networks for
reliable data transmission to multiple users. An asymmetric network consists of a comparatively
large bandwidth unreliable communication channel coupled with a low bandwidth reliable communication channel.
We have designed a protocol which can be used to transmit data reliably over such
asymmetric networks. We have implemented the protocol in a file transfer utility for the
Distance Education Program, IIT Bombay, which uses a satellite network of the above kind.
The algorithm is general enough to be employed for reliably transferring data over any
asymmetric network, though changes in network parameters will affect the protocol's
efficiency.
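The repair loop on such an asymmetric network can be sketched as a NACK-driven simulation: blocks go out on the lossy broadcast channel, and each round the receiver returns the set of missing block numbers over the slow but reliable back channel, so the sender retransmits only those. This toy model (hypothetical names and loss figures) ignores the real protocol's windowing and back-channel bandwidth limits.

```python
import random

def asymmetric_transfer(n_blocks, loss_rate, seed=0):
    """Simulate NACK-based repair over an asymmetric network: every round,
    each outstanding block is independently lost with probability
    loss_rate; the receiver NACKs the missing set over the reliable back
    channel and only those blocks are resent. Returns rounds needed.
    (Illustrative model only, not the thesis protocol.)"""
    rng = random.Random(seed)
    missing = set(range(n_blocks))
    rounds = 0
    while missing:
        rounds += 1
        missing = {b for b in missing if rng.random() < loss_rate}
    return rounds

# With 10% loss, each round shrinks the missing set ~10x, so a 1000-block
# file typically completes in a handful of rounds.
print(asymmetric_transfer(1000, 0.1))
```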
Dynamic adaptation of the DCF and PCF mode in IEEE 802.11 WLANs:
Thesis: PDF file [671 K].
Slides: PPT [1.6 M].
IEEE 802.11 specifies the most popular family of WLANs. It features two basic modes of
operation: the Distributed Coordination Function (DCF) and the Point Coordination
Function (PCF). The PCF and DCF modes of IEEE 802.11 do not perform equally well under
all traffic scenarios; their behavior varies depending upon the current network size and
traffic load. It is useful to use the DCF mode for low traffic and small network sizes,
and the PCF mode for high traffic loads and to reduce contention in large networks.
In this thesis, we have designed three protocols to dynamically adapt the IEEE 802.11 MAC
under varying load. The first is designed to dynamically switch between the two modes:
our Dynamic Switching Protocol (DSP) observes network traffic to decide the switching
point and switches dynamically to suit the current traffic load and network size.
PRRS is our second contribution, aiming to reduce polling overheads. A major drawback of
the polling scheme in PCF is its inefficiency when only a small number of nodes have data
to send: unsuccessful polling attempts cause unnecessary delays for stations with data.
We present a network-monitoring-based scheme that replaces the simple Round Robin
scheduling in PCF with our Priority Round Robin Scheduling (PRRS). Results show a
considerable increase in throughput, especially when only a small fraction of nodes has
data to transmit.
In addition, we have presented the need to dynamically adapt various configuration
parameters in both PCF and DCF. Statically configured values result in degraded
performance under varying scenarios. We have shown the performance variation of PCF with
PRRS using different CFP repetition intervals. Our proposed CFP repetition interval
adaptation algorithm dynamically adjusts the value of the CFP repetition interval
depending upon the last CFP usage.
Design and evaluation of IEEE 802.11 based Dual MAC for mobile adhoc networks:
Thesis: PDF file [350 K].
Slides: PDF [77 K].
Multihop ad-hoc wireless networks offer great challenges for protocol
designers. Stations in such networks are constrained by factors like low
power, limited bandwidth, link errors, and collisions. Changes are needed
at various levels of the protocol stack, most importantly at the medium
access layer (MAC). The medium access mechanism in multihop wireless
networks should minimize collisions, and take care of the hidden and
exposed node problems. The IEEE 802.11 MAC with the Distributed Coordination Function
(DCF) does not scale well in such networks. We introduce the Point Coordination Function
(PCF) in high-traffic regions, and discuss its effect on network performance.
To improve network scalability and throughput, we propose the design of a new MAC called Dual MAC.
This work discusses architecture and working of the dual MAC in detail.
Performance results of the network using dual MAC are presented, and
compared with that of pure DCF operation.
Tackling the exposed node problem in IEEE 802.11 MAC:
Thesis: PDF file [1456 K]. PS file [1611 K].
Slides: PPT [338 K].
Ad hoc wireless networks promise convenient infrastructure-free communication.
An efficient MAC protocol through which mobile stations can share a common broadcast channel is
essential in an ad-hoc network because the medium or channel is a scarce resource.
The basic medium access mechanism is Carrier Sense Multiple Access with Collision
Avoidance (CSMA/CA). The CSMA/CA protocol is designed to reduce the collision probability
between multiple stations accessing the medium.
The 802.11 MAC forestalls the possibility of feasible parallel communication by two
neighboring nodes that are either both senders or both receivers, giving rise to the
exposed node problem. Exposed nodes are nodes which are forced to defer the transmission
of data because they are already exposed to an ongoing transmission. This work adds
enhancements to the IEEE 802.11 MAC which enable it to schedule concurrent transmissions,
thereby improving channel utilization and solving the exposed node problem.
Video streaming in mobile environments:
Manoj Kumar C.
Thesis: PDF file [481 K]. PS file [559 K].
Slides: PDF [225 K].
Video Streaming refers to the real-time transmission of stored video.
In order to control network congestion, rate control schemes adjust the
output rate of the video stream to the estimated available bandwidth in the network.
Loss-based rate control schemes such as AIMD and TFRC employ the packet loss rate
reported by the receiver as the principal feedback parameter to estimate the state of the
network. In a heterogeneous network, where the streaming server is located on the wired
network and the client/receiver is located on the wireless network, the bursty error
rates of the wireless channel mean that the loss rate reported by the receiver may not be
a correct indicator of congestion in the network. Especially during bad wireless channel
conditions, when the channel error rate is high, the reported loss rate will be high.
Hence, loss-based rate control schemes may inaccurately estimate the state of the network
and respond by decreasing the output rate, which affects the quality of video received by
the clients.
Two schemes have been proposed to alleviate the above problem. In the ROLC
(Report-congestion-losses) scheme, the receiver is made to report only the fraction of
packets lost due to congestion. Because of this, errors in the wireless channel cannot
affect the rate control and the quality of the video, as these errors are not reported to
the sender. In the Report-correlation-of-loss-and-delay scheme, besides the loss rate,
the correlation of packet loss and delay during a feedback interval is reported. This
parameter is used by the sender, in addition to the loss rate, to respond only to
congestion.
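The intuition behind the second scheme is that congestion fills queues, so losses coincide with rising delay, whereas wireless bit errors do not. A correlation-based classifier over a feedback interval can be sketched as below; the function names, samples, and the 0.5 threshold are hypothetical, not the thesis's actual parameters.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def losses_look_like_congestion(loss_flags, delays, threshold=0.5):
    """Report congestion when per-packet loss indicators correlate strongly
    with observed delays over the feedback interval.
    (Sketch of the Report-correlation-of-loss-and-delay idea; the
    threshold is a hypothetical figure.)"""
    return pearson(loss_flags, delays) > threshold

# Losses only at the high-delay samples: congestion-like.
print(losses_look_like_congestion([0, 0, 0, 1, 1], [20, 22, 25, 80, 95]))  # -> True
# Losses scattered regardless of delay: channel-error-like.
print(losses_look_like_congestion([1, 0, 1, 0, 0], [20, 22, 25, 21, 23]))  # -> False
```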
Route repair in mobile adhoc networks:
Thesis: PDF file [345 K]. PS file [998 K].
Slides: PDF [343 K].
Mobile Ad hoc Networks (MANETs) are distributed, mobile, wireless, multihop networks that
operate without pre-existing communication infrastructure, except for the mobile devices
themselves. Several routing protocols, both reactive and proactive, have been proposed to
provide the self-starting behavior needed for ad hoc networks.
The nature of the network, coupled with the mobility of the devices, results in a large
number of route breakages. The current approach in the case of broken routes is to flag
an error and re-initiate route discovery, either at the source or at an intermediate
node. Repairing these broken links is a costly affair in terms of the routing overhead
and delay involved.
In this report, we propose a proactive approach called Routing Handoff, to repair broken routes,
using the mobile devices in the vicinity of the broken link.
The idea is incorporated into the AODV routing protocol. The results of the simulation indicate
an increase in throughput under certain conditions. The improvement is a result of smaller
overhead and delay.
The approach may also be applied to other routing protocols with appropriate modification.
Security issues in mobile agents:
Thesis: PDF file [233 K].
Slides: PDF [100 K].
An autonomous mobile agent is an executing program that can migrate
from machine to machine in a heterogeneous network under its own
control. An agent can either follow a pre-assigned path on the
network or determine its itinerary based on the data collected from
the network. Facilities for highly dynamic movement of code and data
enable a program to take advantage of the locality of data. They also
allow one to cope with the constraints of low bandwidth, high
latency, and intermittent network connectivity.
This computing paradigm which exploits code, data and state mobility
raises many new security issues, which are quite different from
conventional client/server systems. Agent servers, which provide an
execution environment for agents, can be attacked by
malicious agents. Similarly, agents could be carrying sensitive
information about their owners and should be protected from tampering
by malicious hosts. Also, the data collected by the agent from one
host should be protected from tampering by another host in the network.
In this report, we examine the various security issues that arise in
mobile agents in general with special reference to data collection
agents. We propose an algorithm to identify the malicious host
modifying the data in data collection agents. Multiple hosts can
collude to remove the data collected by the agent from previous
hosts. We give a probabilistic collusion detection algorithm to
detect deletion of data by colluding malicious hosts.
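The kind of tamper evidence involved can be sketched with a hash chain over per-host entries. This illustrates the general technique only; the thesis's actual identification and collusion-detection algorithms are more involved, and all names here are assumptions.

```python
# Sketch: tamper-evident data collection by a mobile agent using a hash chain.
import hashlib

def append_entry(chain, host, data):
    """Each host appends (host, data, digest); the digest binds this entry
    to everything the agent collected before it."""
    prev = chain[-1][2] if chain else "origin"
    digest = hashlib.sha256(f"{prev}|{host}|{data}".encode()).hexdigest()
    chain.append((host, data, digest))

def verify_chain(chain):
    """The owner recomputes the chain; modifying or deleting an intermediate
    entry breaks every digest after it."""
    prev = "origin"
    for host, data, digest in chain:
        if hashlib.sha256(f"{prev}|{host}|{data}".encode()).hexdigest() != digest:
            return False
        prev = digest
    return True
```

A plain chain like this detects tampering but cannot by itself defeat colluding hosts that jointly strip a suffix of entries, which is why a separate probabilistic collusion-detection step is needed.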
ATCP: Adapted TCP for mobile wireless environments:
Ajay Kumar Singh
Thesis: PDF file [284 K]. PS file [357 K].
Slides: PDF [190 K].
Transmission Control Protocol (TCP) is tuned to perform well in wired networks,
but suffers from performance degradation over mobile wireless networks.
This is due to a distinct characteristic of these networks: packet losses
also occur because of high bit error rates and the intermittent disconnections induced by mobility,
and TCP misinterprets these losses as an indication of congestion.
Several approaches have been put forth by the research community for alleviating the
detrimental effect of the poor characteristics of the wireless channel, but only a few
address the mobility-related issues. Even these attempts
mostly require support from the base station, such as maintaining some state per TCP connection.
This requirement makes such schemes undesirable, as they
introduce scalability difficulties, hinder the inter-operability of the mobile host
with different networks, and cannot support encrypted IP traffic or
acknowledgements that travel a different path.
All these observations motivated us to design a new approach called ATCP.
ATCP falls into the category of approaches that mitigate the degrading effect
of mobility while requiring modification only at the mobile host, like the 3DA and Freeze TCP approaches.
We also noticed that the authors of 3DA and Freeze TCP have not specified any
action to be taken when the mobile host is the sender. ATCP is designed to enhance
the performance of data transfer in both directions, i.e., fixed host (FH) to mobile host (MH)
and MH to FH. The ATCP scheme modifies TCP at the mobile host end and requires feedback
about ongoing mobility from the network layer, in the form of connection and disconnection signals.
ATCP has been compared with 3DA, Freeze TCP and TCP Reno using the ns-2 network simulator.
It improves the performance of TCP data transfer in both directions, i.e., FH to MH and MH to FH.
For data transfer from FH to MH, ATCP is shown to reduce response time as the 3DA approach does,
but it also achieves higher TCP throughput in most cases. It is shown that in WLAN environments
ATCP and Freeze TCP perform equally well, while in WWAN environments ATCP achieves the same
throughput as Freeze TCP for short disconnection periods. ATCP has a significant advantage over
Freeze TCP in that it does not require prediction of impending disconnections.
Another drawback of Freeze TCP is that the throughput over the connection is sensitive to
the prediction period. Simulations show that an improvement of up to 40% is achieved in
WLAN environments, and up to 150% in WWAN environments.
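The mobile-host-only handling of disconnection signals can be sketched as a small state machine. This is a hedged illustration of the general idea, not the thesis's ns-2 code: on a disconnection signal the sender freezes its congestion state and ignores timeouts, and on reconnection it resumes from the saved state instead of collapsing into slow start.

```python
# Sketch: TCP sender at the mobile host reacting to network-layer
# connection/disconnection signals (names and constants are assumptions).

class MobileTcpSender:
    def __init__(self, cwnd=1, ssthresh=64):
        self.cwnd, self.ssthresh = cwnd, ssthresh
        self.frozen = None  # saved (cwnd, ssthresh) while disconnected

    def on_ack(self):
        if self.frozen is not None:
            return  # disconnected: ignore stale feedback
        if self.cwnd < self.ssthresh:
            self.cwnd *= 2          # slow start
        else:
            self.cwnd += 1          # congestion avoidance

    def on_timeout(self):
        if self.frozen is not None:
            return  # a timeout during disconnection is not congestion
        self.ssthresh = max(self.cwnd // 2, 1)
        self.cwnd = 1

    def on_disconnect_signal(self):
        self.frozen = (self.cwnd, self.ssthresh)

    def on_connect_signal(self):
        if self.frozen is not None:
            self.cwnd, self.ssthresh = self.frozen  # resume without slow start
            self.frozen = None
```

Because the signals come from the mobile host's own network layer, no disconnection prediction is required, which mirrors ATCP's stated advantage over Freeze TCP.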
Distributed intrusion detection systems:
Thesis: PDF file [420 K]. PS file [951 K].
Slides: PPT [291 K].
As more and more data goes online, there is a pressing need to secure the
dissemination of a large amount of information. Because of the effort
required to monitor networks and systems manually, it is not easy to
detect attempts at misuse or successful attacks without the help of
intelligent Intrusion Detection Systems (IDS).
IDS, much like the security industry, has grown rapidly over the past
few years. These tools have become essential security components - as
valuable to many organizations as a firewall. However, as in any
environment, things change. Networks and crackers are evolving fast,
demanding that security tools keep up. Intrusion Detection Systems face
several daunting, but exciting challenges in the future and are sure to
remain one of our best weapons in the arena of network security.
The modern-day network IDS faces some very challenging problems, such as
switched environments, increased network traffic, and encryption. Add to
that the performance considerations of an IDS, such as false positives
and missed attacks, and the molehill does become a mountain! The way
forward seems to be analysis and data correlation, in which host IDSs also
play an important role. The concept of a management console dedicated to
the task of correlating abnormal event notifications, with relevance
measures, is an emerging one. One can picture many distributed elements
performing specific jobs, each passing its results on to a higher level
for correlation and analysis.
In an environment where many machines have similar configurations, a
complete portscan on one machine may trigger alarms but slow scans
across ports of different machines might go unnoticed and will result in
the intruder gaining all the information about the services running on
each machine, thus successfully performing a "distributed portscan".
We focus on detecting a "distributed portscan", by sniffing packets
on the network. Five types of TCP portscans, performed by "nmap" are
successfully detected, in scan sweeps of one-to-one, one-to-many,
many-to-one and many-to-many hosts. Our approach also manages to detect
slow scans which are typically missed by available commercial packages,
because of the features that we select to examine.
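The correlation idea can be illustrated with a toy detector. This is an assumption-laden sketch, not the thesis implementation: rather than alarming only when many ports are probed on one machine, probes from a source are correlated across all machines over a long window, which also catches slow scans. The window and threshold values are illustrative.

```python
# Sketch: detect a "distributed portscan" by correlating probes from one
# source across many target machines over a long time window.
from collections import defaultdict

class ScanDetector:
    def __init__(self, window=3600.0, threshold=20):
        self.window, self.threshold = window, threshold
        self.probes = defaultdict(list)  # src -> [(time, dst, port)]

    def observe(self, t, src, dst, port):
        events = self.probes[src]
        events.append((t, dst, port))
        # Drop probes that fell out of the correlation window.
        self.probes[src] = [e for e in events if t - e[0] <= self.window]
        # Count distinct (machine, port) targets probed by this source.
        targets = {(d, p) for _, d, p in self.probes[src]}
        return len(targets) >= self.threshold  # True -> raise an alarm
```

Because the count is over distinct (machine, port) pairs rather than ports on one host, a slow sweep across many machines accumulates toward the same threshold as a fast scan of one machine.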
Extending the ns2 simulator for GPRS support:
Thesis: PDF file [742 K]. PS file [1668 K].
Slides: PPT [220 K].
Enabling wireless Internet access at data rates comparable to wired
networks has been a growing concern in recent years. One attempt at this
is the General Packet Radio Service (GPRS), a packet based extension to
GSM. The packet switched radio transmission enables efficient
multiplexing of radio resources, and data rates of up to 170 kbps can be achieved.
GPRS networks are currently being
deployed in the market, and there is a need to study performance
and network related issues. Simulations can provide a basis for
evaluation and key decisions for deployment. In this project we
undertake the design and implementation of a GPRS simulator.
We focus on the handling of the air interface, i.e., the Link Layer,
the Radio Link Layer and the Medium Access Layer, for the Mobile Station -
Base Station interaction.
We use the ns-Network Simulator as a base for the implementation.
Using the simulator, we then study the effect of load conditions on the average
packet delay and the number of users supported by a GPRS system vs a GSM system.
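A delay-versus-load study of the kind mentioned above can be sketched with a toy slotted simulation: several users share a pool of radio timeslots, and average packet delay is measured as offered load grows. All parameters are illustrative assumptions and this does not model the actual GPRS RLC/MAC protocol.

```python
# Toy slotted-queue simulation: shared timeslots, delay vs offered load.
import random

def average_delay(arrival_prob, n_users, n_slots, n_frames=10000, seed=1):
    random.seed(seed)
    queue, delays = [], []
    for now in range(n_frames):
        # Each user generates a packet with probability arrival_prob per frame.
        for _ in range(n_users):
            if random.random() < arrival_prob:
                queue.append(now)
        # Up to n_slots packets are served per frame (the shared timeslots).
        for _ in range(min(n_slots, len(queue))):
            delays.append(now - queue.pop(0))
    return sum(delays) / len(delays) if delays else 0.0
```

As the offered load (n_users x arrival_prob) approaches the slot capacity, the average delay rises steeply, which is the qualitative behavior such a load study looks for.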
Distributed archival and retrieval of medical images using DICOM:
Dr. Sheikh Mahmood
Thesis: PDF file [1605 K]. PS file [5378 K].
Slides: PPT [970 K].
The introduction of digital imaging devices in the medical world has made the
management of images easier and more efficient. The Digital Imaging and
Communications in Medicine (DICOM) standard promotes the communication of
digital image information regardless of device manufacturer.
Digital Images in a hospital are managed with Picture Archiving and
Communication System (PACS), which receives, stores and makes these images
available to Radiologists and Physicians for diagnosis and review.
The interface provided to the user by a PACS is proprietary, with each vendor
having a separate user interface.
This report describes the implementation and usage of a World Wide Web interface
to a DICOM-based PACS that allows the user to query, select, move and display
the images available in a DICOM archive.
The system supports querying of the archive from any workstation (such as Windows,
Mac) that supports a WWW browser.
In order to use the system, the user runs a WWW browser (such as Netscape,
Internet Explorer) and specifies the URL of the server machine. On receiving
the request, the server sends a query form to be displayed on the client. The
query form contains three text input fields for patient name, patient ID and
birthdate. The user may specify any or all the fields as well as wildcards in
the name field.
Once the form is completed, the user clicks a button to submit the request.
The HTML form submits the query to a CGI program that executes on the Unix
server. This program accepts as input the form field values that the user
specified. It then communicates with the archive via DICOM requests to determine
those patients that match the search criteria. The user may then choose a
patient, which in turn causes the studies for this patient to be displayed.
Finally, the user may select a study that causes those images to be retrieved
from the archive and displayed via the Web browser. The result of this system
is an easy-to-use interface to a DICOM PACS, with the option to query or move
images from the PACS.
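The query-matching step of the form described above can be sketched in a few lines. The record format and helper names are illustrative assumptions; a real CGI program would translate the form fields into a DICOM C-FIND request to the archive rather than matching locally.

```python
# Sketch: match patient-query form fields (with "*" wildcards in the name)
# against archive records, mirroring optional form fields.
import fnmatch

def match_patients(records, name="*", patient_id="", birthdate=""):
    """records: list of dicts with 'name', 'id', 'birthdate' keys.
    An empty field matches anything, like a blank form field."""
    results = []
    for rec in records:
        if name and not fnmatch.fnmatch(rec["name"].upper(), name.upper()):
            continue
        if patient_id and rec["id"] != patient_id:
            continue
        if birthdate and rec["birthdate"] != birthdate:
            continue
        results.append(rec)
    return results
```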
Routing in mobile adhoc networks:
Thesis: PDF file [474 K]. PS file [3314 K].
Slides: PPT [160 K].
Mobile ad hoc networks are infrastructure-less, wireless networks of mobile
nodes that are formed dynamically without much set up time or cost. They are
used where communications infrastructure is unavailable or infeasible to use,
as in battlefield or disaster relief operations.
Routing in such ad-hoc networks is an issue since node mobility makes the
network topology highly dynamic.
This thesis surveys some of the work done in the area of ad hoc routing and
proposes Kelpi, a new routing algorithm. Kelpi is based on the observation that
with most routing algorithms, an intermediate node's movement could cause
routes between distant nodes to be affected. Kelpi is designed to maintain
stable routes that form a dynamic virtual backbone in an ad hoc network. The
principle behind this is to keep routing information confined to a geographical
area despite nodes being mobile. This is done by imposing a cellular structure
on the ad hoc network using an accurate positioning service at every node.
Kelpi also aims to increase link-layer throughput by using multiple levels of
transmission power to reduce radio interference.
An implementation of Kelpi for the Network Simulator ns2 is also described.
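The cellular idea behind Kelpi can be sketched briefly. This is an illustrative assumption-based model, not the ns2 implementation: each node maps its position (from the positioning service) to a grid cell, and routing state is kept per cell, so movement of individual nodes within a cell does not disturb routes between distant nodes.

```python
# Sketch: impose a cellular structure on an ad hoc network and keep
# routes valid at the cell level (cell size is an illustrative parameter).

def cell_of(x, y, cell_size=250.0):
    """Map a position to a discrete grid-cell identifier."""
    return (int(x // cell_size), int(y // cell_size))

def route_still_valid(cell_route, node_positions, members):
    """A cell-level route stays valid as long as every cell on it still
    contains at least one member node, regardless of which node it is."""
    occupied = {cell_of(*node_positions[n]) for n in members}
    return all(cell in occupied for cell in cell_route)
```

Confining routing information to cells is what keeps routes between distant nodes stable while intermediate nodes move around.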
Mobile agents for distance evaluation:
Thesis: PDF file [452 K].
Slides: PPT [320 K].
New models for education dissemination have emerged with the growth of
distributed systems, especially with the widespread penetration of Internet.
This has made it possible to impart education on a larger scale.
Distance evaluation of students constitutes a crucial factor for success of
Distance Education initiatives. Such large distributed systems also raise a
number of challenges in terms of design, technologies and their implementations.
Most of the present day systems have client-server architectures. The
client-server model though powerful, has scalability limitations for distance
evaluation systems. Over the past few years, the mobile agent paradigm, which
has emerged as a new approach for structuring distributed applications,
attempts to address many of these concerns.
In this project, we survey the existing mobile agent frameworks to understand
the state of the art. We then use the mobile agent approach for designing,
implementing and deploying a system for distance evaluation of students. We
consider the examination process: starting from paper-setting, where the
examiners spread over the internet collaborate to produce a question paper, to
examination conduction, answer-paper evaluation, and ending with result
compilation and publishing.
In this report, we detail our design, implementation and experiments. We
conclude by presenting our observations and experiences of using mobile agents for
designing large distributed systems. We also list some of the challenges that
still need to be tackled and indicate the future directions of our work.
Mobile agents for e-commerce:
Thesis: PDF file [440 K]. PS file [1038 K].
Slides: PPT [508 K].
In the past few years, the mobile agent (MA) paradigm has received a great deal of
attention. While MAs have generated considerable excitement in the research
community, they have not translated into a significant number of real-world
applications. One of the main reasons for this is the lack of work that
quantitatively evaluates the effectiveness of mobile agents versus traditional
approaches. This project contributes towards such an evaluation and implements
an e-commerce application using mobile agents.
The existing mobile agent applications in the domain of e-commerce are
classified and the underlying patterns of mobility are identified. These in
turn determine several implementation strategies using the traditional
client-server and the mobile agent paradigms.
The project quantitatively evaluates various implementation strategies and
identifies various application parameters that influence application performance.
The project also provides a qualitative and quantitative comparison across three
Java-based mobile agent frameworks, viz. Voyager, Aglets, and Concordia,
for e-commerce applications.
Finally, we present the implementation and deployment issues of a complete B2C
e-commerce application using mobile agents and messaging, and discuss the
software engineering aspects of mobile agent technology.
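The kind of quantitative comparison the project performs can be sketched with a simple network-cost model for visiting n servers. All parameters and formulas here are illustrative assumptions, not the project's measured results: client-server pays a request/response pair per server, while a mobile agent pays one migration per hop, growing as it accumulates results, plus a final hop home.

```python
# Sketch: cost model comparing client-server vs mobile-agent strategies.

def client_server_cost(n_servers, request, response):
    """Total bytes over the client's link for n request/response pairs."""
    return n_servers * (request + response)

def mobile_agent_cost(n_servers, agent_size, result_per_server):
    """The agent migrates server-to-server, carrying the results collected
    so far, then makes one final hop home with everything it gathered."""
    total = 0
    for hop in range(n_servers + 1):  # n migrations out, one hop home
        total += agent_size + hop * result_per_server
    return total
```

Such a model exposes the application parameters that decide the winner: agent code size, per-server result size, number of servers, and the relative size of requests and responses.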
Automated testing of UML behavioral descriptions:
[co-guide: Prof. S.Ramesh]
Thesis: PDF file [1075 K]. PS file [5119 K].
Slides: PPT [627 K].
The increasing complexity of safety-critical software systems and the costs
associated with software failure have given rise to a need for practical and
effective testing strategies.
Software testing is a critical element of software quality assurance and
represents the ultimate review of specification, design, and coding.
Testing is a dynamic method for verification and validation, wherein the
execution behavior of the given system is observed for several test cases.
The time and cost devoted to testing needs to be managed accurately.
Too often, lack of sufficient testing causes schedule and budget over-runs with
insufficient guarantee of quality.
An effective software testing strategy starts with the implementation of a
component test process. In component testing, critical components (shared,
reusable, or complex) are subjected to functional or black box testing to
ensure that they are functionally correct. Also, code coverage, i.e., the number
of execution paths covered by a given test case, needs to be analyzed to
determine the effectiveness of the testing process.
State machines or statecharts are a common approach for the abstraction of
control and data flow in a wide range of applications, such as real-time
applications, networking protocols, GUI design and safety-critical systems.
Statechart is a formalism for visual specification of reactive system behavior.
The formalism extends traditional finite-state machines with notions of
hierarchy and concurrency, and is used in many popular software design
notations. A large part of the appeal of using statecharts for testing is
due to the intuitive operational interpretation of state machine behavior.
Statecharts are hierarchical in nature and allow one to decompose states into substates.
For systems where tasks are performed under time constraints, the
misinterpretation of even a single event in the state machine's behavior can lead to
the collapse of the whole underlying system. Hence an efficient testing tool is
mandatory for such systems.
In this report, we discuss various testing techniques, testing strategies
and statechart semantics, using practical examples. We have developed
StateTest, a tool that takes a statechart specification and the test
scripts as the input, and generates the pass or fail report and the coverage
metrics as the output. Presently, StateTest can test the behavior of traditional
as well as hierarchical statecharts. The tool can also be extended to test
statecharts that have orthogonal (concurrently executing) regions as well as history states.
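What a statechart test runner of this kind does can be sketched in miniature: drive a (flattened) state machine with a scripted sequence of events, check the expected states, and report pass/fail together with transition coverage. The table format and names are assumptions, not StateTest's actual input format.

```python
# Sketch: run a test script against a flat transition table and report
# pass/fail plus transition-coverage metrics.

def run_script(transitions, initial, script):
    """transitions: {(state, event): next_state}. script: list of
    (event, expected_state). Returns (passed, coverage_fraction)."""
    state, fired = initial, set()
    passed = True
    for event, expected in script:
        key = (state, event)
        if key in transitions:
            fired.add(key)
            state = transitions[key]
        if state != expected:
            passed = False  # record the failure but keep measuring coverage
    coverage = len(fired) / len(transitions) if transitions else 1.0
    return passed, coverage
```

Hierarchical statecharts can be handled by flattening composite states into such a table first, which is one common way a tool supports hierarchy.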
Sequential and parallel reachability analysis of concurrent Java programs:
[IIT-G, co-guide: Prof. G. Sajith]
Thesis: PDF file [415 K]. PS file [481 K].
Slides: PPT [276 K].
Concurrent programs are more difficult to test than sequential programs because
of their non-deterministic behaviour.
Reachability analysis is an important and well known tool for static analysis
of concurrent programs. It involves the systematic enumeration of all possible
global states of program execution. However, traditional algorithms to generate
all reachable states of a concurrent program have exponential time and space complexity.
Apportioning is a technique based on the idea of classifying program
analysis points as local (i.e., having influence within a class) or global
(i.e., having influence outside a class). Apportioning uses this classification
to abstract away some analysis points, thus reducing the size of the
reachability graphs generated.
This report presents two algorithms for the generation of reachability
graphs for a concurrent Java program. The algorithms are implemented for some
apportioning-based reachability analysis tools. The results generated are used
to verify the efficiency of apportioning as a tool for reachability analysis of
concurrent Java programs.
The first algorithm is a sequential implementation of the apportioning
technique, while the second generates the reachability graphs in parallel.
While the sequential algorithm demonstrates the reduction in the space
complexity of a reachability graph generation, the second algorithm attempts to
mitigate the time complexity.
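The core enumeration step can be sketched as follows. This toy models each thread as a list of program points and ignores synchronization, which is precisely what makes real reachability analysis (and the apportioning abstraction) necessary; all names are illustrative.

```python
# Sketch: breadth-first enumeration of all global states reachable by
# interleaving the threads' local steps.
from collections import deque

def reachability_graph(thread_lengths):
    """Global state = tuple of per-thread program counters.
    Returns the set of reachable global states and the edge list."""
    initial = tuple(0 for _ in thread_lengths)
    seen, edges, frontier = {initial}, [], deque([initial])
    while frontier:
        state = frontier.popleft()
        for tid, length in enumerate(thread_lengths):
            if state[tid] < length:  # thread tid can take one more step
                nxt = state[:tid] + (state[tid] + 1,) + state[tid + 1:]
                edges.append((state, nxt))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen, edges
```

The state count is the product of (steps + 1) over all threads, which is the exponential blow-up that apportioning reduces by abstracting away purely local analysis points.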
||Comparison of routing protocols for mobile adhoc networks
||Process migration support in Linux
||Reachability graph generation of concurrent Java programs
||Generation of control-flow graphs for Java programs
||Policy based management for enterprise networks
||Sameer S Chabungbam
In addition to the above, a large number of students from other colleges have worked with me for their final year projects.
The numbers increased so rapidly that eventually I gave up trying to keep track of these projects.
On average, this amounts to about 10 students (3 group projects) per year.
| Last Updated: June 2016