Operations Research Applications in the field of Information and Communication Technologies
This post may not make sense to most of my readers out there. But I spent a lot, a hell of a lot, of time gathering information for this particular assignment. Thought of posting it, so that somebody else will find it easier in future.
Abstract: Traditionally, Operations Research is the scientific study of logistic networks, providing decision support at all levels in order to optimize the production and distribution of commodity flows. Nowadays, these logistic networks have become very large and may range over several countries, while demands for quality of service have similarly grown to ever higher standards. It is generally agreed that to maintain such large networks successfully, one needs control of all the information flows through the network, that is, continuous information on the status of the resources. In this sense one could say that Operations Research and Information Technology have joined together. This article aims to analyze several aspects and problems that arise in such modern logistic and communication networks.
Key Words: Simulation, Optimization, Game Theory, Pattern Recognition, Queuing, Network Routing, Stochastic Networks, Stochastic Simulation, Transportation, Data Mining.
1.1 Operations Research - an overview
Management science, or operations research, is a specialized discipline for business decision-making. The term Operations Research (OR) describes the discipline focused on the application of information technology to informed decision-making. In other words, OR represents the study of optimal resource allocation. The goal of OR is to provide rational bases for decision-making by seeking to understand and structure complex situations, and to use this understanding to predict system behavior and improve system performance. Much of the actual work is conducted by using analytical and numerical techniques to develop and manipulate mathematical models of organizational systems composed of people, machines, and procedures. OR involves solving problems that have complex structural, operational and investment dimensions, involving the allocation and scheduling of resources. A typical approach might include:
- Defining the problem, including identifying the absolute requirements and the objectives to be achieved, identifying what information is required and/or available, and what form the answer should take;
- Breaking the problem down into logical elements that can be analyzed or “solved”;
- Solving the problem using the most appropriate analytical technique; and
- Offering insights into the problem, such as determining the sensitivity of the outcomes to inputs and determining the value of additional information.
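To make the idea of optimal resource allocation concrete, the following is a minimal sketch of a toy product-mix problem. All the numbers (profits of 3 and 5 per unit, 40 machine hours, 60 labour hours) are made up for illustration; in practice such problems are solved with linear programming rather than brute-force enumeration, but enumeration keeps the example self-contained.

```python
# Toy resource-allocation problem (hypothetical numbers):
# two products with unit profits 3 and 5, limited machine and labour hours.
# We enumerate integer production plans and keep the most profitable feasible one.

def best_plan(max_units=100):
    best = (0, (0, 0))  # (profit, (units of product 1, units of product 2))
    for x1 in range(max_units + 1):
        for x2 in range(max_units + 1):
            # constraints: x1 + 2*x2 <= 40 machine hours, 3*x1 + x2 <= 60 labour hours
            if x1 + 2 * x2 <= 40 and 3 * x1 + x2 <= 60:
                profit = 3 * x1 + 5 * x2
                if profit > best[0]:
                    best = (profit, (x1, x2))
    return best

print(best_plan())  # (108, (16, 12)): make 16 of product 1 and 12 of product 2
```

The optimum sits at the intersection of the two resource constraints, which is typical of linear allocation problems: the scarce resources are fully used.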
2.0 Operations Research in Information Technology
Operations research modeling has traditionally been a very difficult task, but with the arrival of information technology and the use of computers in modeling, the job has become much easier. The practice of OR involves a major activity in problem formalization and model construction and validation; other activities include a computational part, analysis of solutions, arriving at conclusions, and implementation of the decision. Increased computing power has stimulated large-scale use of mathematical programming models for planning and online control. Databases and computer networks make reliable, up-to-date data available for more effective decision-making. TORA and SIMNET II are examples of software packages in OR. Some problems are "one-off"; others require an ongoing solution. Where an ongoing solution is required, we are able to deliver a framework for solving the problem in the future and, if necessary, a computerized solution integrated into an organization's existing information infrastructure.
Some of the possible solution techniques include:
2.1 Simulation
A computer simulation, computer model, or computational model is a computer program, or network of computers, that attempts to simulate an abstract model of a particular system. Computer simulations have become a useful part of the mathematical modeling of many natural systems in physics (computational physics), astrophysics, chemistry and biology; of human systems in economics, psychology, and social science; and of the process of engineering new technology, to gain insight into the operation of those systems.
Computer simulations are used in a wide variety of practical contexts, such as:
- analysis of air pollutant dispersion using atmospheric dispersion modeling
- design of complex systems such as aircraft and also logistics systems.
- design of noise barriers for roadway noise mitigation
- flight simulators to train pilots
- weather forecasting
- forecasting of prices on financial markets (for example Adaptive Modeler)
- behavior of structures (such as buildings and industrial parts) under stress and other conditions
- design of industrial processes, such as chemical processing plants
- strategic management and organizational studies
- reservoir simulation in petroleum engineering to model subsurface reservoirs
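The simplest illustration of the idea behind many of these applications is Monte Carlo simulation: run a program that draws random samples and use the aggregate behaviour to estimate a quantity that would be hard to compute directly. The classic textbook example, estimating π by sampling random points in the unit square, is sketched below.

```python
import random

def estimate_pi(n_samples=100_000, seed=42):
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that fall inside the quarter circle of radius 1 tends
    to pi/4 as the sample count grows."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / n_samples
```

The same pattern, replacing the geometric test with a model of the system under study, underlies simulations of weather, queues, financial prices, and structures under stress.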
2.2 Network Flow Programming
The term network flow program describes a type of model that is a special case of the more general linear program. The class of network flow programs includes such problems as the transportation problem, the assignment problem, the shortest path problem, the maximum flow problem, the pure minimum cost flow problem, and the generalized minimum cost flow problem. It is an important class because many aspects of actual situations are readily recognized as networks, and the representation of the model is much more compact than the general linear program. When a situation can be entirely modeled as a network, very efficient algorithms exist for the solution of the optimization problem, many times more efficient than linear programming in the utilization of computer time and space resources. Network models are constructed by the Math Programming add-in and may be solved either by the Excel Solver, the Jensen LP/IP Solver, or the Jensen Network Solver.
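As a sketch of one member of this class, the following is a compact implementation of the maximum flow problem using the Edmonds-Karp algorithm (Ford-Fulkerson with breadth-first augmenting paths). The example graph and its capacities are invented for illustration.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Maximum flow via Edmonds-Karp. capacity: {u: {v: cap}}."""
    # Build the residual graph, adding zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from source to sink.
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximal
        # Find the bottleneck capacity along the path.
        path_flow, v = float('inf'), sink
        while parent[v] is not None:
            path_flow = min(path_flow, residual[parent[v]][v])
            v = parent[v]
        # Push the bottleneck flow and update residual capacities.
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= path_flow
            residual[v][u] += path_flow
            v = u
        flow += path_flow

network = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
print(max_flow(network, 's', 't'))  # 5
```

The efficiency claim in the paragraph above is visible here: the algorithm exploits the network structure directly instead of handing a large constraint matrix to a general LP solver.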
2.3 Data Mining
Generally, data mining (sometimes called data or knowledge discovery) is the process of analyzing data from different perspectives and summarizing it into useful information - information that can be used to increase revenue, cut costs, or both. Data mining software is one of a number of analytical tools for analyzing data. It allows users to analyze data from many different dimensions or angles, categorize it, and summarize the relationships identified. Technically, data mining is the process of finding correlations or patterns among dozens of fields in large relational databases. Data mining is primarily used today by companies with a strong consumer focus - retail, financial, communication, and marketing organizations. It enables these companies to determine relationships among "internal" factors such as price, product positioning, or staff skills, and "external" factors such as economic indicators, competition, and customer demographics. It also enables them to determine the impact on sales, customer satisfaction, and corporate profits. Finally, it enables them to "drill down" into summary information to view detailed transactional data.
Data mining consists of five major elements:
• Extract, transform, and load transaction data onto the data warehouse system.
• Store and manage the data in a multidimensional database system.
• Provide data access to business analysts and information technology professionals.
• Analyze the data by application software.
• Present the data in a useful format, such as a graph or table.
Different levels of analysis are available:
• Artificial neural networks: Non-linear predictive models that learn through training and resemble biological neural networks in structure.
• Genetic algorithms: Optimization techniques that use processes such as genetic combination, mutation, and natural selection in a design based on the concepts of natural evolution.
• Decision trees: Tree-shaped structures that represent sets of decisions. These decisions generate rules for the classification of a dataset. Specific decision tree methods include Classification and Regression Trees (CART) and Chi Square Automatic Interaction Detection (CHAID) . CART and CHAID are decision tree techniques used for classification of a dataset. They provide a set of rules that you can apply to a new (unclassified) dataset to predict which records will have a given outcome. CART segments a dataset by creating 2-way splits while CHAID segments using chi square tests to create multi-way splits. CART typically requires less data preparation than CHAID.
• Nearest neighbor method: A technique that classifies each record in a dataset based on a combination of the classes of the k records most similar to it in a historical dataset (where k ≥ 1). Sometimes called the k-nearest neighbor technique.
• Rule induction: The extraction of useful if-then rules from data based on statistical significance.
• Data visualization: The visual interpretation of complex relationships in multidimensional data. Graphics tools are used to illustrate data relationships.
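The nearest neighbor method described above is short enough to sketch in full. This is a minimal k-nearest-neighbor classifier over numeric feature vectors; the training points and labels in the usage example are invented for illustration.

```python
import math
from collections import Counter

def knn_classify(point, training, k=3):
    """Classify `point` by majority vote among the k nearest labelled
    examples. `training` is a list of (feature_tuple, label) pairs;
    distance is Euclidean."""
    neighbours = sorted(training, key=lambda ex: math.dist(ex[0], point))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical historical dataset: two well-separated clusters.
history = [((0, 0), 'a'), ((0, 1), 'a'), ((1, 0), 'a'),
           ((5, 5), 'b'), ((5, 6), 'b'), ((6, 5), 'b')]
print(knn_classify((0.5, 0.5), history))  # 'a'
print(knn_classify((5.5, 5.5), history))  # 'b'
```

Real data mining tools add indexing structures so the nearest neighbours can be found without scanning the whole historical dataset, but the classification rule is exactly this vote.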
2.4 Pattern Recognition
A complete pattern recognition system consists of a sensor that gathers the observations to be classified or described, a feature extraction mechanism that computes numeric or symbolic information from the observations, and a classification or description scheme that does the actual job of classifying or describing observations, relying on the extracted features.
The classification or description scheme is usually based on the availability of a set of patterns that have already been classified or described. This set of patterns is termed the training set, and the resulting learning strategy is characterized as supervised learning. Learning can also be unsupervised, in the sense that the system is not given an a priori labeling of patterns, instead it itself establishes the classes based on the statistical regularities of the patterns.
The classification or description scheme usually uses one of the following approaches: statistical (or decision-theoretic) or syntactic (or structural). Statistical pattern recognition is based on statistical characterizations of patterns, assuming that the patterns are generated by a probabilistic system. Syntactic (or structural) pattern recognition is based on the structural interrelationships of features. A wide range of algorithms can be applied for pattern recognition, from simple naive Bayes classifiers and neural networks to k-nearest-neighbor decision rules.
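A naive Bayes classifier, mentioned above, is a clear example of the statistical approach: it assumes features are generated independently given the class and picks the class with the highest posterior probability. The sketch below works on binary features with Laplace smoothing; the toy "spam" training set is invented for illustration.

```python
import math
from collections import defaultdict

def train_nb(examples):
    """examples: list of (feature_tuple, label) pairs with binary features.
    Returns class counts and per-(class, position, value) feature counts."""
    label_counts = defaultdict(int)
    feat_counts = defaultdict(int)
    for feats, label in examples:
        label_counts[label] += 1
        for i, v in enumerate(feats):
            feat_counts[(label, i, v)] += 1
    return label_counts, feat_counts

def predict_nb(model, feats, n_values=2):
    """Pick the class maximizing log P(class) + sum log P(feature|class),
    with add-one (Laplace) smoothing over n_values possible feature values."""
    label_counts, feat_counts = model
    total = sum(label_counts.values())
    best_label, best_score = None, float('-inf')
    for label, n in label_counts.items():
        score = math.log(n / total)  # log prior
        for i, v in enumerate(feats):
            c = feat_counts[(label, i, v)]
            score += math.log((c + 1) / (n + n_values))  # smoothed likelihood
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical training data: features are (contains_offer, contains_link).
model = train_nb([((1, 1), 'spam'), ((1, 0), 'spam'),
                  ((0, 0), 'ham'), ((0, 1), 'ham'), ((0, 0), 'ham')])
print(predict_nb(model, (1, 1)))  # 'spam'
```

Despite its independence assumption, this "simple" classifier is a workhorse of statistical pattern recognition precisely because training and prediction reduce to counting.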
2.5 Stochastic simulation
Stochastic simulation algorithms and methods were initially developed to analyze chemical reactions involving large numbers of species with complex reaction kinetics; they are now widely applied to stochastic networks. These are networks of entities, with particles residing in and moving between these entities according to stochastic processes. A key example is a queueing network, where the entities are service facilities and the particles are customers.
In the design of computer, communication, and manufacturing systems, the most important criterion presently is quality of service, in relation to the costs of the system. The quality of service is expressed in terms of performance and reliability of the systems in relation to their applications. Stochastic networks provide the mathematical models for the description and analysis of these systems. Technological developments have in recent years led to new forms of the processing, storage and transmission of information, and have changed considerably the way companies are organized. In its turn, this has given rise to a plethora of new and challenging problems in the analysis and control of stochastic networks.
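A minimal stochastic network can be simulated as a particle hopping between entities according to fixed transition probabilities, and the long-run fraction of time spent at each entity estimated from the sample path. The two-node network below, with a symmetric 50/50 transition rule, is a made-up example chosen so that the long-run occupancy of each node should be close to one half.

```python
import random

def simulate_network(transition, start, steps, seed=0):
    """Simulate a particle moving over a stochastic network.
    transition: {node: [(next_node, prob), ...]} with probs summing to 1.
    Returns the fraction of steps the particle spent at each node."""
    rng = random.Random(seed)
    visits = {node: 0 for node in transition}
    node = start
    for _ in range(steps):
        visits[node] += 1
        r, acc = rng.random(), 0.0
        for nxt, p in transition[node]:
            acc += p
            if r <= acc:
                node = nxt
                break
    return {n: c / steps for n, c in visits.items()}

two_node = {'A': [('A', 0.5), ('B', 0.5)],
            'B': [('A', 0.5), ('B', 0.5)]}
freq = simulate_network(two_node, 'A', steps=20_000)
```

Replacing the transition table with service and routing rules turns this skeleton into a queueing-network simulator of the kind used to evaluate quality of service.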
2.6 Network routing
Network routing, a critical element of network management, consists of the decision rules to connect the pairs of origins and destinations in order to communicate at a given rate on a given topology with fixed link capacities. In the hierarchy of decision problems that dominate network management, routing stands between network design (where topology, facility location and capacity assignment are considered under long-term strategic objectives) and flow control (where traffic is dynamically organized under short-term operational objectives at each switch and router along the routes specified by the routing module). These decision levels are strongly interconnected, giving rise to integrated optimization models at each interface, such as capacity and flow assignment or routing under quality of service constraints.
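The core computation behind shortest-path routing is Dijkstra's algorithm, which underlies link-state routing protocols such as OSPF. The sketch below computes shortest-path distances over a small weighted graph whose topology and weights are invented for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source`. graph: {u: {v: weight}}
    with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry, already improved
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical topology: link weights could be delays or costs.
topology = {'a': {'b': 1, 'c': 4}, 'b': {'c': 2, 'd': 6}, 'c': {'d': 3}, 'd': {}}
print(dijkstra(topology, 'a'))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```

Routing under quality-of-service constraints extends this basic computation with additional constraints per path (e.g. bandwidth or delay bounds), which is where the integrated optimization models mentioned above come in.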
2.7 Stochastic Optimization
Stochastic optimization algorithms have been growing rapidly in popularity over the last decade or two, with a number of methods now becoming "industry standard" approaches for solving challenging optimization problems. In short, while classical deterministic optimization methods (linear and nonlinear programming) are effective for a range of problems, stochastic methods are able to handle many of the problems for which deterministic methods are inappropriate. Stochastic optimization refers to the minimization (or maximization) of a function in the presence of randomness in the optimization process. The randomness may be present as either noise in measurements or Monte Carlo randomness in the search procedure, or both.
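Both sources of randomness named above appear in the following sketch: the search procedure perturbs the current point randomly, and each function evaluation is corrupted by measurement noise. The noisy quadratic objective, with its true minimum at x = 3, is a made-up test problem; this is a bare-bones random search, not a tuned industrial method.

```python
import random

def stochastic_search(f, x0, step=0.5, iters=2000, seed=1):
    """Minimise f by random perturbation of the current point, accepting a
    move only when the (possibly noisy) evaluation improves. A sketch of
    stochastic search, not suitable for hard problems as written."""
    rng = random.Random(seed)
    x, fx = x0, f(x0, rng)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # Monte Carlo search randomness
        fc = f(cand, rng)
        if fc < fx:
            x, fx = cand, fc
    return x

# Hypothetical objective: quadratic with minimum at x = 3, plus measurement noise.
noisy = lambda x, rng: (x - 3) ** 2 + rng.gauss(0, 0.01)
x_best = stochastic_search(noisy, 0.0)
```

Because evaluations are noisy, the accept/reject test can occasionally be fooled; practical methods such as simulated annealing or stochastic approximation add schedules and averaging to cope with exactly this.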
2.8 Queuing Theory
Queueing theory tries to answer questions such as: the mean waiting time in the queue, the mean system response time (waiting time in the queue plus service time), the mean utilization of the service facility, the distribution of the number of customers in the queue, the distribution of the number of customers in the system, and so forth. These questions are mainly investigated in a stochastic setting, where, for example, the inter-arrival times of the customers or the service times are assumed to be random.
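For the single-server M/M/1 queue (Poisson arrivals at rate λ, exponential service at rate μ), the mean waiting time in queue has the closed form Wq = ρ/(μ − λ) with utilization ρ = λ/μ. The sketch below estimates the same quantity by simulation using Lindley's recursion for successive customers' waiting times, so the analytic and simulated answers can be compared; the rates λ = 1, μ = 2 are chosen arbitrarily for illustration.

```python
import random

def mm1_mean_wait(lam, mu, n_customers=50_000, seed=7):
    """Estimate the mean waiting time in queue for an M/M/1 queue using
    Lindley's recursion: W_{n+1} = max(0, W_n + S_n - A_{n+1}), where S is
    a service time and A an inter-arrival time."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n_customers):
        service = rng.expovariate(mu)
        interarrival = rng.expovariate(lam)
        w = max(0.0, w + service - interarrival)
        total += w
    return total / n_customers

# Analytic value for lam=1, mu=2: rho = 0.5, Wq = rho / (mu - lam) = 0.5
print(mm1_mean_wait(1.0, 2.0))  # close to 0.5
```

When arrival or service distributions are not exponential, closed forms are rarely available, and exactly this kind of stochastic simulation takes over from the analytic formulas.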
All information from the cited links was downloaded as on 29-01-2010.