High Profit In Emerging Information Technologies
Friday, October 1, 1999



“If the aircraft industry had evolved as spectacularly as the computer industry over the past 25 years, a Boeing 767 would cost $500 today, and it would circle the globe in 20 minutes on five gallons (twenty litres) of fuel.”

From “Personal Computers” by Hoo-min D. Toong and Amar Gupta, Scientific American, December 1982

Some information resides on various makes and types of computers, some exists on paper, and some can only be accessed through personal interactions. The overhead involved in managing and integrating these pieces of information is a major barrier to enhancing productivity.
No single organization, however, can design solutions that encompass all these issues. Industry, government, and academia must work in concert to develop approaches that can surmount strategic, organizational and technical barriers to the effective deployment of information technology. This concerted approach is emphasized by the PROFIT (Productivity From Information Technology) Initiative. PROFIT was established in 1992 by a group of researchers to define new processes and technologies required to gain greater productivity from IT in both the private and public sectors. PROFIT’s goal is to enhance productivity in areas ranging from finance to transportation, and from manufacturing to telecommunications.
PROFIT researchers have created transferable technology for lifting information automatically from paper. Consider this: in the US alone, more than 65 billion bank checks are written each year. The PROFIT research team has developed a prototype system that can read bank checks and other paper documents quickly and accurately. The system first scans the check and then focuses on the handwritten numbers, using a neural network trained to identify the digits. Tests conducted with hundreds of checks deliberately written to confound the software show that it can correctly recognize strings of numbers more than 85 percent of the time. This level of accuracy is high enough that banks in the US, Britain and Japan have acquired copies of the software for testing. The team is also working with the leading systems integration house in Brazil to evaluate the efficacy of deploying the system there on a nationwide basis.
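The article does not describe PROFIT's network itself, so the following is only a toy sketch of the underlying idea: a small classifier trained by gradient descent to map pixel patterns to digits. The 3x5 glyph bitmaps and the single-layer softmax model are illustrative stand-ins for real scanned images and the actual system.

```python
import numpy as np

# Hypothetical 3x5 bitmap glyphs for the digits 0-9 (1 = ink, 0 = blank).
# A real check reader works on scanned grayscale images; these tiny
# patterns merely stand in for that input.
GLYPHS = {
    0: "111101101101111", 1: "010110010010111", 2: "111001111100111",
    3: "111001111001111", 4: "101101111001001", 5: "111100111001111",
    6: "111100111101111", 7: "111001001001001", 8: "111101111101111",
    9: "111101111001111",
}

X = np.array([[int(c) for c in GLYPHS[d]] for d in range(10)], float)
Y = np.eye(10)  # one-hot targets: row d is the label for digit d

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (15, 10))
b = np.zeros(10)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Plain full-batch gradient descent on the cross-entropy loss.
for _ in range(3000):
    P = softmax(X @ W + b)
    grad = P - Y
    W -= 0.1 * X.T @ grad / len(X)
    b -= 0.1 * grad.mean(axis=0)

def read_digit(bitmap: str) -> int:
    """Classify one 15-pixel bitmap string as a digit 0-9."""
    x = np.array([[int(c) for c in bitmap]], float)
    return int(np.argmax(softmax(x @ W + b)))

print(read_digit(GLYPHS[7]))  # reads back a clean "7"
```

A production system would of course train on thousands of real handwriting samples and segment the digit string first; the sketch only shows the train-then-classify loop.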
In parallel, the MIT Technology Licensing Office is working to license PROFIT’s data/knowledge acquisition techniques to interested organizations. This technology formed the basis for RapidVision, a semi-finalist in the 1998 MIT 50K contest, an annual competition to identify the most promising entrepreneurial opportunities. (Incidentally, the first coordinator for research in this data/knowledge acquisition area was Dr. Ronjon Nag, who subsequently founded Lexicus, later acquired by Motorola Corporation.)

The Check’s Not in the Mail
The type of system described above could not only reduce personnel costs associated with check cashing, but also help streamline the general system of check processing. Today, when a traveler pays a California dealer with a check drawn on a Boston bank, the dealer deposits the check at a local bank. That bank forwards the check to its branch of the Federal Reserve, which may be located hundreds of miles away. The branch sends the check by overnight courier to the Federal Reserve Bank branch in Boston, which passes it to a lead bank, which passes it in turn to the customer’s local bank, which finally returns the physical check to the customer. The total societal cost of processing a single check has been estimated to exceed $1.20; sending electronic images of scanned checks instead would save millions of dollars in annual postage costs alone.
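The figures above support a quick back-of-envelope estimate of the stakes, using only the numbers cited in this article:

```python
# Figures from the article: 65 billion checks per year in the US,
# at an estimated societal cost exceeding $1.20 per check.
checks_per_year = 65e9
cost_per_check = 1.20

total_cost = checks_per_year * cost_per_check
print(f"${total_cost / 1e9:.0f} billion per year")  # $78 billion per year
```

Even a small percentage shaved off that total by electronic imaging translates into very large absolute savings.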
The same technology has many applications, ranging from insurance processing forms to healthcare practitioner correspondence. The rapid growth of the Internet is opening up new low-cost options for data/knowledge dissemination. And the specifications for the Clinton-Gore administration’s “Next Generation Internet Initiative” include the ability to transmit all the text and pictures in a full set of the Encyclopedia Britannica in less than one second!

Divergent Data
A third area of opportunity relates to knowledge management, which focuses on the integration of disparate data sources. One aspect of this effort relates to integrating the islands of information systems that characterize virtually all large organizations. The number of these islands, as well as their size, has grown over the years as organizations have invested in more and more computer systems to support their growing reliance on computerized data. This abundance of data has made the problem of integration more pronounced and complex.
The advent of the Internet allows us to receive data from numerous sources around the world. For example, if you happen to be in India and want to check the weather in Boston on December 1, an online source will probably inform you that the temperature is 30 degrees, which would make it a very hot day indeed for Boston in December! In reality, the temperature is 30 degrees Fahrenheit, just the beginning of a four-month stretch of sub-freezing weather. Over time, one is increasingly called upon to reconcile such contextual differences within and across organizations. For example, receivers of data from stock exchanges around the world must know what currency the shares are quoted in, as well as other idiosyncrasies of the data source; while most stock exchanges quote amounts in absolute terms, some quote share prices as percentages of the original offering price.
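The weather example can be sketched as a minimal "context mediation" step: each source declares the conventions its numbers follow, and a mediator converts them into the receiver's context before use. The source names and conventions below are illustrative, not PROFIT's actual implementation.

```python
# Each data source carries metadata describing its local conventions.
# (Hypothetical sources; a real registry would cover currencies,
# quoting conventions, date formats, and so on.)
CONTEXTS = {
    "boston_weather": {"temp_unit": "F"},
    "delhi_weather":  {"temp_unit": "C"},
}

def to_celsius(value: float, source: str) -> float:
    """Convert a reported temperature into the receiver's context (Celsius)."""
    unit = CONTEXTS[source]["temp_unit"]
    return (value - 32) * 5 / 9 if unit == "F" else value

# The article's example: "30 degrees" from a Boston source is near freezing.
print(round(to_celsius(30, "boston_weather"), 1))  # -1.1
```

The point is that the conversion is driven by source metadata rather than hard-coded per application, so the same mediator can serve any number of sources.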
By designing and implementing automated techniques to handle incoming data from intra-organizational and inter-organizational sources, one can attain significant payoffs. This is especially true of areas like supply chain management which, by definition, involve multiple data sources, significant amounts of data, and the need for quick decision-making.

Virtual Conferencing
A related aspect of knowledge management focuses on methods of knowledge dissemination among virtual teams holding electronic meetings on generic computers. Two PROFIT staff members, Sanjeev Vadhavkar and Karim Hussein, have been developing Collaborative Distributive Integrated (CDI) software. Today’s business processes require interaction among people separated by geographical, organizational and cultural boundaries. For example, a construction project may require meetings between the owner, the architect, the contractor and the engineer. Traditionally, this occurs in one of two ways: either by relocating the entire team to a central facility where they can collaborate over the duration of the project, or by passing project documents and products between the team members. Both methods make imperfect use of human expertise and organizational resources, and both tend to degrade the quality of the end products.
The CDI approach mitigates these problems by providing an Internet-based collaboration backbone, configured around generic hardware and software, that enables participants to attend virtual conferences. A communication control model has been developed to support individual interaction as well as provide process control capabilities for group interactions in typical distributed meeting sessions over the Internet. This approach has already been used in an academic course taught over the Internet simultaneously at MIT and at the Centro de Investigacion Cientifica y Educacion Superior de Ensenada (CICESE) in Mexico. In addition, the approach is currently being beta tested at several companies, such as Kajima Corporation, and in defense planning projects, such as those at Draper Laboratory.
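The article does not spell out the CDI communication control model, but one common ingredient of such process control is floor control: only one participant broadcasts at a time while others queue for their turn. The sketch below is a generic illustration of that idea, not PROFIT's actual design.

```python
from collections import deque

class FloorControl:
    """Grant the 'floor' (the right to broadcast) to one participant
    at a time; later requests wait in a first-come, first-served queue."""

    def __init__(self):
        self.holder = None
        self.queue = deque()

    def request(self, participant: str) -> None:
        if self.holder is None:
            self.holder = participant       # floor is free: grant immediately
        else:
            self.queue.append(participant)  # otherwise wait in line

    def release(self) -> None:
        # Hand the floor to the next waiting participant, if any.
        self.holder = self.queue.popleft() if self.queue else None

# The construction-project meeting from the article:
meeting = FloorControl()
meeting.request("architect")
meeting.request("contractor")
print(meeting.holder)   # architect holds the floor
meeting.release()
print(meeting.holder)   # contractor is next
```

A real conferencing backbone layers this kind of control over shared documents, audio, and video streams, but the queue-and-grant discipline is the same.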

Upgrading Data into Knowledge
A fourth area, possibly the most promising, is knowledge discovery. PROFIT has worked with a large decentralized organization that sells 5,000 different medical products through 2,000 outlets. This organization has traditionally required that if a random customer walked into any store on any day for any item, the probability of finding that item in stock would exceed 95 percent. To attain this target, the organization was carrying over $1 billion in inventory on a continuing basis. Using neural network-based data mining techniques, PROFIT demonstrated that the same probability could be achieved with only half a billion dollars of inventory, a reduction of fifty percent. This inventory optimization model is now being extended to other fields, such as managing inventories of financial securities and optimizing continuous processes.
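The article reports only the outcome (the same 95 percent in-stock probability at roughly half the inventory), not the model behind it. As a stand-in for the data-mining approach, the classic safety-stock calculation below shows how a stock level is tied to a target service probability; the demand figures are purely illustrative.

```python
from statistics import NormalDist

def stock_for_service_level(mean_demand: float, sd_demand: float,
                            service_level: float) -> float:
    """Stock needed so that P(demand <= stock) >= service_level,
    assuming normally distributed demand over a replenishment cycle."""
    z = NormalDist().inv_cdf(service_level)  # e.g. about 1.645 for 95%
    return mean_demand + z * sd_demand

# One hypothetical item: mean demand 400 units, standard deviation 120.
print(round(stock_for_service_level(400, 120, 0.95)))  # about 597 units
```

Better demand forecasts shrink the standard deviation term, which is how more accurate models (such as the neural networks mentioned above) can hold the service level while cutting the inventory carried.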
By undertaking a systematic approach embracing all four facets, one can derive huge benefits from emerging information technologies. The ability to acquire information, distill it, and act on it in short periods of time will ultimately determine which businesses succeed in a “leaner and meaner” global business environment.

