We propose a novel method for automatic annotation, indexing and annotation based retrieval of images. The new method, that we call Markovian Semantic Indexing (MSI), is presented in the context of an online image retrieval system. Assuming such a system, the users’ queries are used to construct an Aggregate Markov Chain (AMC) through which the relevance between the keywords seen by the system is defined. The users’ queries are also used to automatically annotate the images. A stochastic distance between images, based on their annotation and the keyword relevance captured in the AMC, is then introduced. Geometric interpretations of the proposed distance are provided and its relation to a clustering in the keyword space is investigated. By means of a new measure of Markovian state similarity, the mean first cross passage time (CPT), optimality properties of the proposed distance are proved. Images are modeled as points in a vector space and their similarity is measured with MSI. The new method is shown to possess certain theoretical advantages and also to achieve better Precision versus Recall results when compared to Latent Semantic Indexing (LSI) and probabilistic Latent Semantic Indexing (pLSI) methods in Annotation-Based Image Retrieval (ABIR) tasks.
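The aggregate Markov chain construction described above can be sketched as follows. This is an illustrative reading only (consecutive query keywords treated as observed transitions, counts row-normalized); the authors' exact MSI formulation may differ.

```python
from collections import defaultdict

def build_amc(queries):
    """Estimate an aggregate Markov chain over keywords from user queries.

    Each query is an ordered list of keywords; consecutive keywords are
    treated as observed transitions (an assumption for illustration).
    """
    counts = defaultdict(lambda: defaultdict(int))
    for q in queries:
        for a, b in zip(q, q[1:]):
            counts[a][b] += 1
    # Row-normalize the transition counts into probabilities.
    amc = {}
    for a, row in counts.items():
        total = sum(row.values())
        amc[a] = {b: c / total for b, c in row.items()}
    return amc

# Toy query log: each query is an ordered keyword list.
queries = [["steak", "restaurant"], ["steak", "grill"], ["steak", "restaurant"]]
amc = build_amc(queries)
```

The resulting row-stochastic matrix is what a keyword-relevance measure (and, in the paper, the stochastic distance between annotated images) would be computed from.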
The core idea of this project is a multi-modal approach that exploits textual content and metadata to gather high-quality images from the web. Candidate images are retrieved by a text-based web search querying on the object identifier. Images are extracted from the returned web pages, and an option is introduced to remove irrelevant images and re-rank the remainder. The main intention of this project is to improve the user satisfaction rate by returning images that have a higher probability of being accepted (downloaded) by the user. The assumption is that the user searches for an image by issuing queries, each individual text query being an ordered set of keywords. The system responds with a list of sites relevant to the searched keywords. After automatically crawling a website, the user can download or ignore the returned images, or issue a new query instead. During the training phase of the system, the images carry no annotation information. As users issue queries and pick images, the system annotates the images automatically and at the same time establishes relevance relations between the keywords. There is no need to provide explicit annotations for the images: an automatic relevance-mapping algorithm maps users to images implicitly, transparently to the user. At the testing phase, the system uses the annotations obtained during training, together with the keyword relevance probability weights also estimated during training, to return images that better reflect the user's preferences and improve user satisfaction. In our proposed system, we implement all of the conceptual information provided and, in addition, integrate ranking features, so that images are ranked automatically based on their metadata features. The user is also given the option of ranking a site manually.
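The annotation-from-downloads idea can be sketched as below. The data layout (query keywords paired with accepted image ids) and the overlap-based ranking are assumptions for illustration, not the project's actual algorithm.

```python
from collections import Counter

def annotate(downloads):
    """Accumulate query keywords as annotations of the images a user accepted.

    `downloads` is a list of (query_keywords, image_id) pairs, recorded each
    time a user downloads an image. Hypothetical layout for illustration.
    """
    annotations = {}
    for keywords, image_id in downloads:
        annotations.setdefault(image_id, Counter()).update(keywords)
    return annotations

def rank(images, query):
    """Rank image ids by keyword overlap between annotation and query."""
    q = set(query)
    return sorted(images, key=lambda i: -sum(images[i][k] for k in q if k in images[i]))

ann = annotate([(["sunset", "beach"], "img1"), (["beach"], "img2"), (["sunset"], "img1")])
order = rank(ann, ["sunset"])
```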
The author suggests a new focus on benchmarks for knowledge systems, following the lines of similar benchmarks in other computing fields. It is noted that knowledge systems differ from conventional systems in a key way, namely their ability to interpret and apply knowledge. This gives rise to a distinction between intrinsic measures concerned with engineering qualities and extrinsic measures relating to task productivity, and both warrant improved measurement techniques. Primary concerns within the extrinsic realm include advice quality, reasoning correctness, robustness, and solution efficiency. Intrinsic concerns, on the other hand, centre on elegance of knowledge base design, modularity, and architecture. The author suggests criteria for good measures and benchmarks, and ways to satisfy these through the design of benchmarks that reflect key knowledge engineering costs and performance parameters. It is suggested that the focus on measuring knowledge systems should help clarify the technical relationships between knowledge engineering and data engineering.
The authors provide an overview of current research and development directions in knowledge and data engineering. They classify research problems and approaches in this area and discuss future trends. Research on knowledge and data engineering is examined with respect to programmability and representation, design tradeoffs, algorithms and control, and emerging technologies. Future challenges are considered with respect to software and hardware architecture and system design. The paper serves as an introduction to the first issue of a new quarterly.
The paper describes a generic approach for data reverse engineering. This approach considers a data reverse engineering procedure to be composed of a source language, a data model and an executable process. It takes into account application domain knowledge. The GENRETRO tool, supporting this approach, is process centered and considers two levels of interaction: one for experts and the other for analysts. The first level can be used to introduce data reverse engineering procedures and application domain knowledge. The second is used to guide the data reverse engineering process. The extracted information is stored in the data dictionary of a software engineering environment.
Collaborative design is a process of continuous data generation and management, and the efficiency of data storage and management influences both design quality and the design period. First, based on an analysis of data storage in engineering design systems, a model for data representation in a collaborative design system is presented. Second, two pre-defined data storage modes, an interactive mode and a knowledge-based mode, are studied. Finally, the pre-defined data storage model is validated through an engineering project.
A common problem in many public and private organizations is the engineering of enterprise knowledge related to data and business processes. We address this problem in the framework of distributed information systems of public administration required to reengineer data and processes to provide integrated services to citizens. The paper presents a reference architecture where enterprise data and process knowledge is organized at a global level to achieve interoperability of applications and service provision. In the architecture, distributed data sources are unified by means of a semantic dictionary, while distributed cooperating processes are modelled as inter-organizational workflows.
In aerospace engineering, simulation is a key technology. Examples are pre-design studies, optimization, systems simulation, or mission simulations of aircraft and space vehicles. These kinds of complex simulations need two distinct technologies. First, highly sophisticated simulation codes for each involved discipline (for example, codes for computational fluid dynamics, structural analysis, or flight mechanics) to simulate the various physical effects. Second, a simulation infrastructure and well-designed supporting tools to work effectively with all simulation codes. This paper focuses on the infrastructure and the supporting tools, especially for managing both the data resulting from large-scale simulation and the knowledge necessary for conducting complex simulation tasks. Examples of recent developments at the German Aerospace Center in the fields of data and knowledge management to support aerospace research by e-Science technologies are presented.
A manufacturing plant process control system generally has hierarchical layers of control and involves many elements of technology. A discussion is presented of the requirements and design issues for manufacturing process control (MPC). Much of the discussion focuses on process-control technology as it relates to system design.
Design objects in CAD applications have versions and participate in the construction of other more complex design objects. The author describes data model aspects of an experimental database system for CAD applications called Pegasus. The model is based on previously published work on extensible and object-oriented database systems. The novel idea of Pegasus is the reconciliation of two subtyping (inheritance) mechanisms: the first, called refinement, is based on the usual semantics of schema copying; the second, called extension, is based on the inheritance semantics between prototypes and their extensions. The author uses these modeling elements to show how generic and version objects as well as component occurrences of (generic or version) components can be modeled.
Access control mechanisms protect sensitive information from unauthorized users. However, when sensitive information is shared and a Privacy Protection Mechanism (PPM) is not in place, an authorized user can still compromise the privacy of a person leading to identity disclosure. A PPM can use suppression and generalization of relational data to anonymize and satisfy privacy requirements, e.g., k-anonymity and l-diversity, against identity and attribute disclosure. However, privacy is achieved at the cost of precision of authorized information. In this paper, we propose an accuracy-constrained privacy-preserving access control framework. The access control policies define selection predicates available to roles while the privacy requirement is to satisfy the k-anonymity or l-diversity. An additional constraint that needs to be satisfied by the PPM is the imprecision bound for each selection predicate. The techniques for workload-aware anonymization for selection predicates have been discussed in the literature. However, to the best of our knowledge, the problem of satisfying the accuracy constraints for multiple roles has not been studied before. In our formulation of the aforementioned problem, we propose heuristics for anonymization algorithms and show empirically that the proposed approach satisfies imprecision bounds for more permissions and has lower total imprecision than the current state of the art.
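A minimal sketch of the interplay between generalization, k-anonymity, and the imprecision of a selection predicate, assuming a single numeric quasi-identifier (`age`) and fixed-width buckets; the paper's framework handles general hierarchies, l-diversity, and multiple roles.

```python
from collections import Counter

def generalize_age(age, width):
    """Map an age to a bucket [lo, hi) of the given width (toy hierarchy)."""
    lo = (age // width) * width
    return (lo, lo + width)

def is_k_anonymous(records, k, width):
    """Check k-anonymity on the generalized 'age' quasi-identifier."""
    groups = Counter(generalize_age(r["age"], width) for r in records)
    return all(c >= k for c in groups.values())

def imprecision(records, lo, hi, width):
    """Extra tuples a predicate age in [lo, hi) picks up once ages are
    generalized: any record whose bucket overlaps the range is returned."""
    exact = sum(lo <= r["age"] < hi for r in records)
    generalized = sum(
        g[0] < hi and g[1] > lo
        for g in (generalize_age(r["age"], width) for r in records)
    )
    return generalized - exact

recs = [{"age": a} for a in [21, 22, 23, 34, 35, 36]]
```

An imprecision bound, in these terms, is an upper limit a role's selection predicate places on the value returned by `imprecision`, which the anonymization must respect.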
Knowledge discovery from scientific articles has received increasing attention recently since huge repositories are made available by the development of the Internet and digital databases. In a corpus of scientific articles such as a digital library, documents are connected by citations and one document plays two different roles in the corpus: document itself and a citation of other documents. In the existing topic models, little effort is made to differentiate these two roles. We believe that the topic distributions of these two roles are different and related in a certain way. In this paper, we propose a Bernoulli process topic (BPT) model which considers the corpus at two levels: document level and citation level. In the BPT model, each document has two different representations in the latent topic space associated with its roles. Moreover, the multi-level hierarchical structure of the citation network is captured by a generative process involving a Bernoulli process. The distribution parameters of the BPT model are estimated by a variational approximation approach. An efficient computation algorithm is proposed to overcome the difficulty of the matrix inverse operation. In addition to conducting the experimental evaluations on the document modeling and document clustering tasks, we also apply the BPT model to well-known corpora to discover the latent topics, recommend important citations, detect the trends of various research areas in computer science between 1991 and 1998, and investigate the interactions among the research areas. The comparisons against state-of-the-art methods demonstrate a very promising performance. The implementations and the data sets are available online.
Conventional spatial queries, such as range search and nearest neighbor retrieval, involve only conditions on objects' geometric properties. Today, many modern applications call for novel forms of queries that aim to find objects satisfying both a spatial predicate, and a predicate on their associated texts. For example, instead of considering all the restaurants, a nearest neighbor query would instead ask for the restaurant that is the closest among those whose menus contain “steak, spaghetti, brandy” all at the same time. Currently, the best solution to such queries is based on the IR 2-tree, which, as shown in this paper, has a few deficiencies that seriously impact its efficiency. Motivated by this, we develop a new access method called the spatial inverted index that extends the conventional inverted index to cope with multidimensional data, and comes with algorithms that can answer nearest neighbor queries with keywords in real time. As verified by experiments, the proposed techniques outperform the IR 2-tree in query response time significantly, often by a factor of orders of magnitude.
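The idea of a spatial inverted index can be sketched as follows. Here each keyword's posting list is a flat list that is scanned, whereas the paper organizes posting lists spatially for real-time performance; the query semantics (nearest object containing all query keywords) are the same.

```python
import math

def build_spatial_inverted_index(objects):
    """For each keyword, keep the (point, id) entries that carry it.

    `objects` maps id -> ((x, y), set_of_keywords). An assumed layout;
    real posting lists would be organized by an R-tree or similar.
    """
    index = {}
    for oid, (pt, kws) in objects.items():
        for kw in kws:
            index.setdefault(kw, []).append((pt, oid))
    return index

def nn_with_keywords(index, objects, q, keywords):
    """Nearest object whose text contains all query keywords."""
    # Candidates must appear in every keyword's posting list.
    lists = [set(oid for _, oid in index.get(k, [])) for k in keywords]
    cands = set.intersection(*lists) if lists else set(objects)
    return min(cands, key=lambda oid: math.dist(q, objects[oid][0]), default=None)

objs = {
    "r1": ((1, 1), {"steak", "brandy"}),
    "r2": ((0.5, 0.5), {"steak"}),
    "r3": ((3, 3), {"steak", "brandy"}),
}
idx = build_spatial_inverted_index(objs)
```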
We identify relation completion (RC) as one recurring problem that is central to the success of novel big data applications such as Entity Reconstruction and Data Enrichment. Given a semantic relation ℜ, RC attempts to link entity pairs between two entity lists under the relation ℜ. To accomplish the RC goals, we propose to formulate search queries for each query entity α based on some auxiliary information, so as to detect its target entity β from the set of retrieved documents. For instance, a pattern-based method (PaRE) uses extracted patterns as the auxiliary information in formulating search queries. However, a scarcity of high-quality patterns may decrease the probability of finding suitable target entities. As an alternative, we propose the CoRE method, which uses context terms learned from the text surrounding the expression of a relation as the auxiliary information in formulating queries. The experimental results based on several real-world web data collections demonstrate that CoRE reaches a much higher accuracy than PaRE for the purpose of RC.
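Query formulation with context terms, in the spirit of CoRE, can be sketched as below; the term weights, the relation, and the query syntax are all hypothetical.

```python
def core_queries(query_entity, context_terms, top_n=2):
    """Formulate search queries by pairing a query entity with the
    highest-weighted context terms of the relation. Illustrative only;
    CoRE's actual weighting and query construction may differ.
    """
    terms = sorted(context_terms, key=context_terms.get, reverse=True)[:top_n]
    return [f'"{query_entity}" {t}' for t in terms]

# Hypothetical context terms learned for a "headquartered in" relation.
ctx = {"headquarters": 0.9, "based": 0.7, "office": 0.4}
qs = core_queries("Acme Corp", ctx)
```

Each formulated query would be sent to a search engine, and the target entity β extracted from the retrieved documents.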
Traditional active learning methods require the labeler to provide a class label for each queried instance. The labelers are normally highly skilled domain experts, to ensure the correctness of the provided labels, which in turn results in expensive labeling cost. To reduce labeling cost, an alternative solution is to allow nonexpert labelers to carry out the labeling task without explicitly telling the class label of each queried instance. In this paper, we propose a new active learning paradigm, in which a nonexpert labeler is only asked “whether a pair of instances belong to the same class”, namely, a pairwise label homogeneity. Under such circumstances, our active learning goal is twofold: (1) decide which pair of instances should be selected for query, and (2) how to make use of the pairwise homogeneity information to improve the active learner. To achieve the goal, we propose a “Pairwise Query on Max-flow Paths” strategy to query pairwise label homogeneity from a nonexpert labeler, whose query results are further used to dynamically update a Min-cut model (to differentiate instances in different classes). In addition, a “Confidence-based Data Selection” measure is used to evaluate data utility based on the Min-cut model's prediction results. The selected instances, with inferred class labels, are included into the labeled set to form a closed-loop active learning process. Experimental results and comparisons with state-of-the-art methods demonstrate that our new active learning paradigm can result in good performance with nonexpert labelers.
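How pairwise "same class" answers can propagate labels is sketched below with a union-find structure. The paper's Min-cut model is richer (it also exploits "different class" answers and drives instance selection along max-flow paths), so this is only a simplified illustration of the inference step.

```python
class DSU:
    """Union-find used to merge instances a nonexpert says are 'same class'."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def infer_labels(n, same_pairs, seeds):
    """Spread a few seed labels through the 'same class' equivalence groups."""
    dsu = DSU(n)
    for a, b in same_pairs:
        dsu.union(a, b)
    group_label = {}
    for i, y in seeds.items():          # seeds: instance -> known class
        group_label[dsu.find(i)] = y
    return {i: group_label.get(dsu.find(i)) for i in range(n)}

out = infer_labels(4, [(0, 1), (2, 3)], {0: "pos", 2: "neg"})
```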
We describe an approach to usage-based Web personalization taking into account both the offline tasks related to the mining of usage data and the online process of automatic Web page customization based on the mined knowledge. Specifically, we propose an effective technique for capturing common user profiles based on association rule discovery and usage-based clustering. We also propose techniques for combining this knowledge with the current status of an ongoing Web activity to perform real-time personalization. Finally, we provide an experimental evaluation of the proposed techniques using real Web usage data.
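A toy version of the two stages, offline mining of page associations and online real-time recommendation, might look like this; the pairwise support/confidence scheme is a stand-in for the association rule discovery and clustering the paper proposes.

```python
from itertools import combinations
from collections import Counter

def mine_rules(sessions, min_support=2):
    """Offline step: mine pairwise page associations with support/confidence.
    A toy stand-in for full association rule discovery (e.g. Apriori)."""
    item_counts = Counter()
    pair_counts = Counter()
    for s in sessions:
        pages = set(s)
        item_counts.update(pages)
        pair_counts.update(combinations(sorted(pages), 2))
    rules = {}
    for (a, b), c in pair_counts.items():
        if c >= min_support:
            rules.setdefault(a, []).append((b, c / item_counts[a]))
            rules.setdefault(b, []).append((a, c / item_counts[b]))
    return rules

def recommend(rules, current_pages):
    """Online step: score pages associated with the ongoing session."""
    scores = Counter()
    for p in current_pages:
        for q, conf in rules.get(p, []):
            if q not in current_pages:
                scores[q] += conf
    return [p for p, _ in scores.most_common()]

sessions = [["home", "products", "cart"], ["home", "products"], ["home", "about"]]
rules = mine_rules(sessions)
recs = recommend(rules, ["products"])
```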
With the explosive growth of the WWW, most searches retrieve a large number of documents. The query results returned have no aggregate structure; thus, it is difficult to forage for relevant pages. We argue that organization of Web query results is essential to the user in terms of usability of query results. In addition, how the query results are organized has a great impact on the ranking scheme and query processing, as well as on search engine usability for the users. We present various schemes for ranking and organizing Web query results in the scope of the Net Topix search engine project at NEC Research Laboratories in San Jose. We describe three associated tasks: full-text search and initial ranking, post-query processing, and query result reorganization for presentation. We discuss our considerations and the pros and cons of each design alternative.
As an alternative to search capability, many search engines are providing directory servers containing categorized Web documents for users to navigate and browse through. We are investigating three issues in portal site construction given a large collection of categorized Web documents: (1) distillation of important topics for each category of documents; (2) distillation of important documents/sites for these topics; and (3) automation of these two tasks. We have developed an automated technique for topics and Web site distillation. Our technique integrates Web document content analysis and link structure analysis. It considers local importance of keywords and their global distribution statistics on a given Web document category hierarchy.
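Combining local keyword importance with global distribution statistics can be sketched with an idf-like weight over categories; this scoring formula is an assumption for illustration, not the paper's actual technique, and it ignores the link-structure analysis entirely.

```python
import math
from collections import Counter

def distill_topics(category_docs, top_n=1):
    """Score keywords per category by local frequency times a global
    distinctiveness factor (fewer categories containing a word -> higher
    weight). A sketch of 'local importance + global distribution'.
    """
    n_cats = len(category_docs)
    df = Counter()                      # in how many categories a word appears
    tf = {}
    for cat, docs in category_docs.items():
        words = Counter(w for d in docs for w in d.split())
        tf[cat] = words
        df.update(words.keys())
    topics = {}
    for cat, words in tf.items():
        scored = {w: c * math.log((1 + n_cats) / (1 + df[w]))
                  for w, c in words.items()}
        topics[cat] = sorted(scored, key=scored.get, reverse=True)[:top_n]
    return topics

cats = {
    "sports": ["football scores football", "football match"],
    "finance": ["stock market", "stock prices market"],
}
topics = distill_topics(cats)
```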
In recent years, there has been much work on the retrieval of information from multimedia repositories. In particular, researchers have shown a great deal of interest in merging the results returned by multiple multimedia repositories into a single result. This task is challenging due to the heterogeneities among such systems, e.g. differences in the indexing methods and similarity functions employed by individual systems. In this paper, we define a declarative language for expressing queries that combine results from multiple similarity computations. We demonstrate the unique characteristics of this language and show how it provides for interesting and novel similarity queries. Numerous examples are given throughout.
Data products (macro-data or tabular data, and micro-data or raw data records) are designed to inform public or business policy, and research or public information. Securing these products against unauthorized access has been a long-term goal of the database security research community and the government statistical agencies. Solutions to this problem require combining several techniques and mechanisms. Recent advances in data mining and machine learning algorithms have, however, increased the security risks one may incur when releasing data for mining to outside parties. Issues related to data mining and security have been recognized and investigated only recently. This paper deals with the problem of limiting disclosure of sensitive rules. In particular, we attempt to selectively hide some frequent itemsets from large databases with as little impact as possible on other, non-sensitive frequent itemsets. Frequent itemsets are sets of items that appear in the database “frequently enough”; identifying them is usually the first step toward association/correlation rule or sequential pattern mining. Experimental results are presented along with some theoretical issues related to this problem.
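A greedy sketch of hiding a sensitive frequent itemset by deleting items from supporting transactions until its support drops below threshold; the paper's algorithms minimize the side effects on non-sensitive itemsets more carefully than this illustration does.

```python
from itertools import combinations

def frequent_itemsets(db, min_sup):
    """All itemsets meeting min_sup (up to size 2 here, for brevity)."""
    freq = {}
    items = {i for t in db for i in t}
    for size in (1, 2):
        for cand in combinations(sorted(items), size):
            sup = sum(set(cand) <= t for t in db)
            if sup >= min_sup:
                freq[cand] = sup
    return freq

def hide_itemset(db, sensitive, min_sup):
    """Greedy sanitization: delete one item of the sensitive set from
    supporting transactions until its support falls below min_sup."""
    victim = sensitive[0]
    for t in db:
        if sum(set(sensitive) <= u for u in db) < min_sup:
            break  # already hidden
        if set(sensitive) <= t:
            t.discard(victim)
    return db

db = [{"a", "b"}, {"a", "b"}, {"a", "b", "c"}, {"b", "c"}]
db = hide_itemset(db, ["a", "b"], min_sup=2)
freq = frequent_itemsets(db, 2)
```

After sanitization the sensitive itemset {a, b} is no longer frequent, while {b, c} survives; the research question is how to keep such survivors maximal.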
The diversity and availability of information sources on the World Wide Web have set the stage for integration and reuse at an unparalleled scale. There remain obstacles to exploiting the extent of the Web's resources in a consistent, scalable and maintainable fashion. The autonomy and volatility of Web sources complicate maintaining wrappers consistent with the requirements of the data's target application. Also, the sources' semantic heterogeneity requires practical methods to mediate their contents. This paper presents an algebra for semistructured data. This algebra is the tool we use to develop wrappers and mediate their semantic content. We describe wrapper refinement and maintenance as the process of developing a congruity measure between source data sets and their target application. This measure expresses explicitly the context within which source data is relevant for its target use. Enabling mediation between wrappers corresponds to establishing an articulation between data sources through a similarity measure. Similarity measures encapsulate conditions under which sources may be used together.
Enterprise databases comprise multiple local databases that exchange information. The component databases will rarely have the same native form, so one must map between the native interfaces of the data suppliers and the recipients. An SQL view is a convenient and powerful way to define this map, because it provides not just an evaluation mechanism, but also query, and (to some degree) update and trigger capabilities. However, SQL views do not map the critical metadata (e.g., security, source attribution, and quality information) between data suppliers and their recipients. The paper examines the research problems arising from the creation of a metadata propagation framework: a theory for inferring and reconciling metadata on views, rules for each property type and data derivation operator, efficient implementation, and ways to coordinate metadata administration across multiple schemas.
This paper will provide information about the benefits of the closed loop knowledge system (CLKS). CLKS is based on a robust warehouse-based data repository and support tools where data access, storage and predictive data mining exploitation can be obtained from a web interface which will enable better decision support for war fighters. Knowledge engineering and analysis of the data required to maintain military aircraft, including supported logistics elements, is complex and time consuming. Access to system requirements (process flows, schematics, etc) needed to support the development and use of smart diagnostics can be minimal. Historical data can provide a valuable resource for supporting integrated diagnostics including development, sustainment, maturation, knowledge transfer, and troubleshooting. Effective use of this information will help evolve the traditional support equipment role into a broader support systems knowledge engineer function. The major benefit of a CLKS is easy access to aircraft data needed to support various user analysis needs. Capabilities include diagnostics authoring, failure monitoring, maintenance adjusting, and technician/analyst support at all maintenance levels. The CLKS provides a solid foundation for the initial authoring and later maturing of fault isolation and diagnostics. Data warehousing, reporting and mining capability, coupled with an organized diagnostics authoring system, will help verify and validate rules needed to drive troubleshooting and maintenance support. As historical data is captured and analyzed, improvements to aircraft system hardware and software can be identified as well as directing maintenance during the troubleshooting process. 
Having access to applicable engineering data at the time of need will: decrease troubleshooting time on production aircraft; increase the ability of the technical user to better understand the diagnostics; reduce ambiguities which drive false removals of system components; decrease misallocated spares; and increase knowledge management. The CLKS keeps every engineer informed and maintains a baseline of domain information important to the success of the team.
Summary form only given. The rapid evolution of tools and software systems to design experiments and to automatically monitor, collect, and warehouse large amounts of data, in applications such as the life sciences and industrial processes, has resulted in a new paradigm shift. This change of paradigm is so fast that some of the practices for optimization and management of these processes that were valid only 5–10 years ago may no longer be fully acceptable or sufficient for today's business optimization and management. This has a direct influence on best practices for knowledge discovery and management of the discovered knowledge in real-world data mining applications. Establishing and managing a real-world data mining project in any domain, in particular in today's life science industry, is not a trivial task. A few approaches have been proposed in the literature. However, initiation and successful management of such efforts may depend on where a given case study fits in the overall classification of data mining approaches. Today's knowledge discovery from data can be classified in several ways: (i) data mining on engineered systems (e.g. complex equipment) or systems designed by nature (e.g. life sciences); (ii) explanatory or predictive data mining; (iii) data mining from static data (e.g. a data warehouse) or dynamic data (e.g. data streams); (iv) user-operated or automated data mining. There could still be other ways to classify data mining applications. This talk provides an overview of the above listed knowledge discovery applications. We provide examples where we demonstrate how small or large amounts of data, when understood from a real-world data mining point of view and when the required data is properly integrated, can result in novel knowledge discovery case studies. We explain the motivations and challenges of establishing real-world data mining projects.
Graduate students interested in academic careers welcome opportunities to discuss issues related to teaching and learning. Experience shows that these students are curious not only about balancing research and teaching, understanding the application and interview processes, and maneuvering through the tenure process, but also about designing curriculum and assessing student learning. To guide students as they learn how to design a lesson, a course, and an entire curriculum and the corresponding assessment tools, a knowledge of educational theories is helpful. But how helpful? How best to share educational theories so that students understand them and can apply them appropriately? Does knowledge of educational theories lead to more confidence as a "curriculum leader"? How does this quality affect the academic career process? How does this knowledge help develop skills in curriculum leadership? The authors of this paper focused on these questions in a graduate seminar titled "Teaching Science and Engineering" during spring 2002 at the University of Wisconsin-Madison. Courter and colleagues developed this course three years ago. That spring, she added Heywood's text, "Curriculum, Instruction and Leadership in Engineering Education," and assessed students' reactions to the depth of the text and especially of the educational theories presented. This text takes them beyond simple packaged data, as might be presented in a short workshop, to a more in-depth understanding of educational theory.
Increasing functionality and complexity of technical products result in complex damage symptoms and failure modes during the customer use phase. Especially in the automotive industry, complex damage symptoms are often the result of multiple failure modes. Continuous field product observation and field data analysis are therefore important means of analyzing product reliability in the use phase. For the manufacturer, the goals are the accurate and economic identification of possible failure modes and knowledge of the product failure behavior, based on field complaints, at an early stage after product market launch. In addition to statistical reliability analysis based on field data (e.g. Weibull distribution analysis, the RAW concept), the technical analysis of damaged field components is essential for the detection and verification of individual failure modes. For this technical analysis in particular, the manufacturer needs damaged field components that represent the whole supposed failure spectrum in the field. The goals for the manufacturer include: (1) early and detailed detection and identification of critical failure modes, with the objective of a targeted response to critical failure modes; (2) conjunction of the requirements for determining the regress rate and detecting the critical failures in a comprehensive approach (cf. chapter 2); (3) optimization of the economic aspects of the reliability analysis in terms of sampling procedure, sampling analysis, and technical analysis costs; (4) integration into existing technical analysis processes to establish and support an industry-specific standard. The temporal aspect results in reduced field monitoring periods and therefore in small volumes of damaged field parts and data. This restricts the use of parametric statistical methods and requires the use of nonparametric statistical methods.
The results of field data and technical analysis form the basis for a targeted roll-out of further actions, for example field failure rectification or product optimization. Moreover, the initiation of concentrated development approaches and strategies for failure prevention in subsequent product generations (e.g. the COP strategy) becomes feasible. An industry-wide or cross-industry approach for economic and optimized sampling procedures for damaged components from the field does not exist. The Chair of Safety Engineering / Risk Management at the University of Wuppertal, in cooperation with manufacturers in the automotive industry, developed the Optimized Multi-Stage Sampling Procedures (OMSP) concept. The OMSP concept is proposed as an analysis standard for the automotive industry. Key aspects of the OMSP concept are as follows: (1) early identification and analysis of critical failure modes with a reduced number of analyzed damaged components but a similar detection rate and resolution accuracy; (2) reduced costs of technical analysis, by reducing the scope of analyzed damaged components while maintaining a comparable detection rate of critical failure modes; (3) deselection of selected data areas, which reduces the amount of data and the number of damaged components that must be considered, and allows the reliability of technical component changes to be verified while drawing constant sample sizes; (4) recognition of critical failure modes, which allows targeted troubleshooting actions, for example in the field or in the current product generation; (5) integration of the OMSP concept into the FDA process, which allows verification of the potential failure modes detected by the statistical reliability analysis. This paper outlines the effectiveness and use of the OMSP concept in a near-reality case study from the automotive industry.
The focus of the case study is the analysis of a shift-by-wire actuator module (consisting of an electric motor and an electrical control unit), including different failure modes. The application of the OMSP concept shows the following essential results.
It has been said that one person's signal is another person's noise, or alternatively, one engineer's ground clutter is another engineer's object of interest. Vegetation observed with spectral imagers can be regarded either way. Some users of spectral data (multispectral or hyperspectral) want to study the plants in the pixels, and others want to use characteristics of the plants to move them out of the way of more interesting things. This is a difficult problem in either case, in that vegetation is not time-static (spectral reflectance changes with the seasons) and vegetation in general has smaller spatial scale than spectral data sets do. One approach is to model the reflectance or other properties of plants at pixel scale, using various observed data and mathematical models to span both the wavelength and spatial dimensions. Data collection at the leaf/branch level is known to be costly in labor, travel, and equipment, and fraught with pitfalls for the field operator. Models for scaling up from plant data to larger spatial dimensions also have assumptions and built-in limitations. One big problem is to identify and avoid sources of error and variance which can cause a collection's data to be suspect. Remotely-sensed spectral data have been around for decades, and research on uses of these data continues even as commercial and government agencies use multispectral and hyperspectral sets to identify species cover in forests, monitor the health of crops, seek mineral deposits, track pollution, and monitor the condition of roadways. Those who use spectral images get the data from sources like LANDSAT, ASTER, and SPOT, or newer systems such as AVIRIS, MODIS, and HYPERION. The technical challenge is to aggregate, upward, the knowledge collected at the leaf level, to enable people to make inferences at the pixel level. I will discuss some of these published efforts in more detail.
There are several issues at each level in this aggregation, including calibration, drift, illumination, and sample integrity. In measuring leaf reflectance, there are several protocols depending on the measuring equipment. The instrument also needs to be calibrated and controlled for drift. Deterioration of the sample being measured will occur. Illumination, if solar, will vary as measurements proceed. The same issues occur in the greenhouse/laboratory setting. We will explore several of these sources of non-representative spectra. Researchers are attempting to bridge the gap from leaf to pixel using canopy reflectance models. A few widely accepted models are SAILH and PROSPECT, and their variants. (Canopy models are also used for estimating biometric properties over large areas.)
Developed over the course of more than six years, the NEPTUNE Canada network, being implemented by the University of Victoria (UVic) with funding from the Canada Foundation for Innovation and the British Columbia Knowledge Development Fund, will be the first ocean observatory to link a wide variety of deepwater science instruments with the global Internet. The development of NEPTUNE Canada poses many challenges. The network must deliver high bandwidth and high power while being manageable and reliable; the entire system, from instruments on the seabed to access points for science users and the public, must be designed to support continuous collection and real-time distribution of time series data. Many innovations related to the wet plant were needed to meet these requirements. The design selected required, amongst other things, the qualification of submarine repeaters and branching units capable of managing the higher currents; adaptation of terrestrial Ethernet switches and transmission equipment for installation in underwater housings; development of a 10 kV to 400 V DC-DC converter in an underwater housing capable of supplying 10 kW; design and manufacture of a distribution system, including junction boxes and extensions, to cost-effectively and reliably connect instruments to the network; and outfitting and upgrading of a shore station to provide reliable power and communications from the outside world. Some of these developments have been challenging, others straightforward. All of them have provided valuable experience, both for the NEPTUNE Canada team, UVic and Alcatel-Lucent, and for anyone wishing to implement or use a cabled observatory. The NEPTUNE Canada project is well under way. It received funding in January 2004; a contract with Alcatel-Lucent was entered into in October 2005; cable manufacture was completed in early 2007; and the backbone cable ring will be installed in summer 2007.
The backbone ring includes repeaters, branching units and spur cables, as well as the shore landings and the ploughed sections. A report on the current project status, including the outcome of the summer 2007 work, will be presented, and the experience gained during that installation will be discussed. This paper will also provide insight into how NEPTUNE Canada and its contractor, Alcatel-Lucent, have addressed the development of this equipment. Policies will be discussed, with particular examples of the effect of these policies on engineering, instrument selection and system architecture decisions. Details of engineering solutions, and their implications for both UVic as the owner and for scientists and the general public as users, will be provided and discussed.
Engineering for development, production, and other product-related company activities is being organized in virtual systems for lifecycle management of product data. This new age of engineering is the result of the continuous, step-by-step movement of product design, analysis and other production engineering activities from conventional environments to modeling environments during the eighties and nineties. However, further developments were needed in knowledge-based decision assistance in order to model human intent, increasingly complex product structures, and efficient communication of product data. In recent years, a great change in engineering methodology and software has been produced by advances in knowledge technology and in local and global communication. Group work of engineers is increasingly organized around special engineering portals on the Internet. Although intensive research activities have produced outstanding knowledge-based methods for engineering, these methods have not become widespread in industrial modeling practice. This paper attempts to evaluate the possibility that recent communication- and knowledge-intensive engineering modeling can be developed into a communication- and knowledge-intensive virtual space technology. The paper starts with a discussion of the integrated application of different groups of product modeling techniques. Following this, methods for the management of product data (PDM) are evaluated. The next section emphasizes aspects, contexts, and intents as primary issues in modeling relationships in product data and decisions by engineers. Finally, methods for managing engineering activities over the product lifecycle are summarized, considering communities of engineers around Internet portals.
Intelligent systems (IS) technologies have received much attention in a wide range of process engineering applications, including process operations. Rapid change in applying the latest technologies has become a serious challenge to both management and technical teams. Objects and components are changing the way we relate to our computers and networks, and most feel the rate of change will continue to increase. All information technology systems have data and communications tools for personnel. The industrial desktop can be adapted to automate decisions, to intelligently analyze large amounts of data, and to learn from past experiences, whether from operators, engineers or managers. They can adapt their desktops according to their domain knowledge, roles, skills and responsibilities. Collaboration between functions (operations, management, maintenance, and engineering) is enhanced. Access to data and analysis tools enables plant personnel to try new ideas, determine and track the right targets, determine and track the best patterns, and transform and store data and knowledge. The paper presents a description of the data hierarchy and analysis means needed to improve process operations. Continuous improvement, with an innovation loop fueled by data collection and analysis methods, emerges as the best method for active decision-making and collaboration. A sampling of results includes extended sub-critical equipment availability, increased production through faster detection of process bottlenecks, and operating cost reduction. Descriptions of these applications are presented.
The increasing functionality and complexity of technical products results in complex damage symptoms and causes during the customer usage phase. Especially in the automobile industry, complex damage symptoms are often traceable to multiple damage causes. This increases the importance of structured field product observation and field data analysis. For the OEM, the goals are high levels of quality and reliability that can be attained by comprehensive analysis of product performance, design and reliability over the whole product life cycle. This includes: (1) After product launch into the market: early and accurate identification of possible and actual damage causes through failure analysis provides knowledge of product failure behavior. This can be attained by analyzing the occurring failures based on a small amount of field damage data. The cost implications should also be assessed. (2) Over the product life cycle: detailed mapping and analysis of the long-term product failure behavior and reliability. This generates the basis for a targeted introduction of further actions, for example rectification of faults in the field, product optimization, or the initiation of concentrated development approaches/strategies for failure prevention (e.g., COP strategies). Currently no industrial standard for reliability analysis fulfills the requirements of a comprehensive reliability analysis over the entire product life cycle. Based on these requirements, the FDA concept was developed by the Department of Risk Management and Safety Engineering at the University of Wuppertal, in cooperation with OEMs of the automotive industry.
Key aspects of the FDA concept are as follows: (1) integration of several statistical and organizational methods in a comprehensive process; (2) mapping of product reliability over the whole product life cycle; (3) statistical analysis of failure data, for example relating to the identification of possible damage causes, production batch differences or climatic influences; (4) usage of qualitative data and information from the value-added network regarding potential damage causes, which can reduce the costs of the technical analysis; (5) conjunction of the determination of the regress rate and the detection of critical failures (product quality analysis) in a comprehensive approach; currently, differing requirements made by the OEMs, e.g. sampling procedures and sample sizes, prevent this conjunction; (6) optimization of analysis costs in terms of sampling procedure, sampling analysis and technical analysis with respect to a high detection rate of damage causes. The paper outlines the process of the FDA concept. Moreover, the paper presents advanced methods (such as the DCD algorithm, WCF approach, OMSP concept and RAW concept; cf. chapters 2.2 and 2.3) and industrial methods (like the Weibull distribution and Eckel candidate methods) to support these key aspects. Finally, the paper outlines the use of value-added networks to gain more information about damage causes before performing the technical reliability analysis. The FDA concept thus supports the mapping, identification and description of complex damage causes of complex components and systems in the automotive industry, with the goal of early failure detection in the field and preventive reliability control of subsequent product generations.
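Among the industrial methods mentioned, the Weibull distribution is the standard tool for describing field failure behavior. As a brief illustration (the parameter values below are invented for the example, not taken from the paper), the two-parameter Weibull reliability function can be sketched as:

```python
import math

def weibull_reliability(t, beta, eta):
    """Two-parameter Weibull reliability R(t) = exp(-(t/eta)**beta).

    beta: shape parameter (beta > 1 indicates wear-out failure modes)
    eta:  scale parameter (characteristic life; R(eta) = exp(-1) ~ 36.8%)
    """
    return math.exp(-((t / eta) ** beta))

def weibull_failure_probability(t, beta, eta):
    """Cumulative failure probability F(t) = 1 - R(t)."""
    return 1.0 - weibull_reliability(t, beta, eta)

# Illustrative wear-out failure mode (beta = 2.0), characteristic life 100,000 km
print(round(weibull_failure_probability(50_000, 2.0, 100_000), 4))  # → 0.2212
```

Fitting beta and eta to small amounts of early field damage data is exactly the kind of analysis the FDA concept wraps in a comprehensive process.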
NOAA's National Ocean Service's Center for Operational Oceanographic Products and Services (CO-OPS) is responsible for ensuring safe maritime navigation and supporting efficient water-borne commerce. CO-OPS oceanographic and environmental data sets also benefit the National Weather Service, coastal zone managers, and the engineering and surveying communities. In 2006, a new pilot project was introduced and implemented in the Great Lakes to measure currents in real time, horizontally across the Cuyahoga River in Cleveland, and the Maumee River in Toledo, Ohio. This project provides CO-OPS with a variety of new opportunities: to expand the National Current Observation Program (NCOP) to the freshwater environment; to enhance partnerships with the Great Lakes shipping community, the City of Cleveland, private industry, and federal agencies; and to test a new platform design for a horizontal acoustic Doppler current profiler developed by the U.S. Army Corps of Engineers, Detroit District. The pilots of the Lake Carriers and Lakes Pilots Associations requested assistance with navigating the Cuyahoga and Maumee Rivers, where winds affect their transit through narrow bridge spans and around sharp bends. The real-time current data provide the pilots with advance knowledge of the conditions to be expected while in transit, affording them the opportunity to load the vessel accordingly before committing to the river. The pilots identified the narrowest channel of the Cuyahoga River, at the Center Street swing bridge in Cleveland, Ohio, as a place where real-time current measurements would provide them with a worst-case scenario of current speed prior to entering the river. Another area identified by the pilots occurs as vessels inbound from Lake Erie must transit through a narrow span of a railroad swing bridge on the Maumee River as they approach the Archer Daniels Midland (ADM) pier to offload their grain.
Access to the latest six-minute record of current speed and direction at ADM represents conditions just beyond the bridge spans, allowing the pilot to plan an approach through this area. The real-time data (every 6 minutes) can be accessed by the public through the Great Lakes Online web page (http://glakesonline.nos.noaa.gov/moncurrent.html). The data are also available via NOAA's Interactive Voice Response System at 301-713-9596. Data collection of current speed and direction started in July 2006 and continues to the present at both sites. A persistent pattern has been noted at the Cuyahoga River site, where a seiche oscillates approximately every 1.5 hours unless sustained winds compromise the current flow. Flow is either inbound or outbound (towards the lake) at approximately 0.8 knots. There is no real-time water level station upstream of the Center Street swing bridge on the Cuyahoga River, so the variation in volume flow from the downstream NOS water level station located in Lake Erie cannot be simultaneously compared to the current data. Significant acoustic signal attenuation was noted at the ADM pier last winter when water temperatures approached 32 degrees Fahrenheit: normal maximum profiling ranges diminished as the temperature dropped toward freezing, and side lobe interference also increased. Other seasonal changes in speed and temperature, and spikes in the data due to wind events, are observed as well. This pilot project may add a third horizontal current meter site at the head of the St. Clair River, in the vicinity of the Blue Water Bridge in Port Huron, Michigan, in late 2007. Since this project is termed a pilot project, the addition of other current meter sites is dependent on follow-on funding from the US Congress or other sources.
A fully database-supported strategy is presented for the development of software for the integrated design and analysis of engineering products. The strategy organizes the elements of engineering design and analysis into a solver and five supporters. The solver embraces all the scientific methods, approaches, and measurements used, treated logically, digitally, and numerically. The supporters are databases that support the solver: Prototype-F, which stores functional, constructive and geometric parameterization prototypes; Material-S, for material and structural metallurgical, physical, static mechanical, fatigue, and fracture parameters; Loads, for service conditions, including loads and environments obtained from expected designs, on-line inspection, and numerical simulations; Criteria, for code-based criteria and the customer's requirements; and History, for historic success and failure events. Information in the databases is called step by step by the solver, through an independent and flexible interface, to match the process of product design and analysis. The software is then developed by combining software engineering, data engineering, and knowledge engineering to form an innovative software environment offering reusability, high efficiency, flexibility, and adaptability. The software's information base can be multiplied and strengthened over days and months, and its functions improve as time goes on. Feasibility has been demonstrated by the development of a software platform for reliability design and analysis of China's high-speed motor units.
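The solver-plus-five-supporters architecture can be pictured as a uniform lookup interface over the five databases. The sketch below is purely illustrative: the database names are the paper's, but the keys, records, and lookup function are invented for the example.

```python
# Five supporter "databases" as in-memory stand-ins; record contents are
# hypothetical placeholders, not values from the paper.
SUPPORTERS = {
    "Prototype-F": {"blade_profile": {"chord_mm": 42.0}},
    "Material-S": {"steel_42CrMo4": {"fatigue_limit_MPa": 540}},
    "Loads": {"service": {"max_torque_Nm": 310}},
    "Criteria": {"design_code": {"safety_factor": 1.5}},
    "History": {"events": []},
}

def solver_lookup(database, key):
    """The independent, flexible interface: the solver calls any supporter
    database by name and key, without knowing its internal layout."""
    return SUPPORTERS[database].get(key)

# The solver pulls a material parameter when the analysis step needs it
record = solver_lookup("Material-S", "steel_42CrMo4")
print(record["fatigue_limit_MPa"])  # → 540
```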
Subjects such as knowledge engineering, pervasive computing, unified communication, ubiquitous sensing and actuation, and situation awareness are gaining the most critical and crucial attention from information technology (IT) professionals and pundits across the globe these days, in order to accomplish the vision of ambient intelligence (AmI). It is all about effective and round-the-clock gleaning of data and information from different and distributed sources. Secondly, whatever is gathered, transmitted, and stored is subjected to a cornucopia of tasks such as processing, mining, clustering, classification, and analysis for the real-time and elegant extraction of hidden actionable insights. Based on the knowledge extracted and the needs identified, the final task is to decide on and initiate the next course of action in time. Not only information, interaction and transaction services, but also physical services can be conceived, constructed and supplied to human users, given the stability and maturity of AmI technologies and instrumented, interconnected, and intelligent devices. This paper gives a detailed description of an AmI application that can provide impenetrable and unbreakable security, convenience, care and comfort for the needy. Our focus here is to develop a secure and safety-critical Ambient Assisted Living (AAL) environment that can monitor the patient's situation and give timely updates. In order to fulfill all these needs, a smart environment has been created to effectively and insightfully attend to patients' needs. The middleware standard preferred for the development and deployment of a bevy of ambient and articulate services is the Open Service Gateway initiative (OSGi).
With the continuous expansion of data availability in many large-scale, complex, and networked systems, such as surveillance, security, the Internet, and finance, it becomes critical to advance the fundamental understanding of knowledge discovery and analysis from raw data to support decision-making processes. Although existing knowledge discovery and data engineering techniques have shown great success in many real-world applications, the problem of learning from imbalanced data (the imbalanced learning problem) is a relatively new challenge that has attracted growing attention from both academia and industry. The imbalanced learning problem is concerned with the performance of learning algorithms in the presence of underrepresented data and severe class distribution skews. Due to the inherent complex characteristics of imbalanced data sets, learning from such data requires new understandings, principles, algorithms, and tools to transform vast amounts of raw data efficiently into information and knowledge representation. In this paper, we provide a comprehensive review of the development of research in learning from imbalanced data. Our focus is to provide a critical review of the nature of the problem, the state-of-the-art technologies, and the current assessment metrics used to evaluate learning performance under the imbalanced learning scenario. Furthermore, in order to stimulate future research in this field, we also highlight the major opportunities and challenges, as well as potential important research directions, for learning from imbalanced data.
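As a minimal, self-contained illustration of the class-distribution skew the survey addresses, random oversampling is the simplest of the sampling strategies in this family. The sketch below is a generic baseline, not the paper's own contribution:

```python
import random
from collections import Counter

def random_oversample(samples, labels, seed=0):
    """Naively rebalance a two-class data set by duplicating randomly
    chosen minority-class examples until both classes are equal in size.

    A baseline illustration only; the survey covers far more sophisticated
    sampling and cost-sensitive methods.
    """
    rng = random.Random(seed)
    counts = Counter(labels)
    majority, minority = [c for c, _ in counts.most_common(2)]
    deficit = counts[majority] - counts[minority]
    minority_idx = [i for i, y in enumerate(labels) if y == minority]
    extra = [rng.choice(minority_idx) for _ in range(deficit)]
    return (samples + [samples[i] for i in extra],
            labels + [labels[i] for i in extra])

# Skewed toy set: four examples of class 0, one of class 1
X = [[0.1], [0.2], [0.3], [0.4], [0.9]]
y = [0, 0, 0, 0, 1]
X2, y2 = random_oversample(X, y)
print(sorted(Counter(y2).values()))  # → [4, 4]
```

Note that naive duplication risks overfitting the minority class, which is one reason the reviewed literature moves to synthetic sampling and cost-sensitive learning.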
In November 2010, Microsoft released the Kinect sensor for the Xbox 360 video game console. This device, similar to a webcam, allows an individual to interact with an Xbox 360 or a computer in three-dimensional space using an infrared depth-finding camera and a standard RGB camera. As of January 2012, over 24 million units had been sold. Using a combination of custom and open-source software, we were able to develop a means for students to visualize and interact with the data, allowing us to introduce the concepts and skills used in the field of Electrical and Computer Engineering. The unique technological application, visual appeal of the output, and widespread ubiquity of the device make this an ideal platform for raising interest in the field of Electrical and Computer Engineering among high school students. In order to understand the appeal of the Kinect, a working knowledge of the technical details of the device is useful. The novelty and appeal of the Kinect sensor lie in its infrared camera, which comprises two distinct devices. An infrared projector sends out a 640x480 grid of infrared beams, and an infrared detector measures how long the reflection of each beam takes to return to the sensor. The resulting data set is known as a "point cloud": a three-dimensional array of data points with values between 40 and 2000, corresponding to the distance of each beam's reflection from the device. The data in this array can then be parsed to construct a 3D image. The Kinect's infrared camera operates at 30 Hz, or 30 samples per second, so the device is able to deliver a frame rate sufficient to create the illusion of motion. This allows for the development of applications that give the user a sense of interacting in real time with the image on the screen.
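The parsing step described above (beam grid to 3D image) can be sketched as a pinhole-camera back-projection of the depth grid. This is an illustrative sketch: the focal length and principal point below are assumptions, not calibrated Kinect intrinsics, and real drivers (e.g. libfreenect or OpenNI) handle this conversion internally.

```python
def depth_to_point_cloud(depth, width, height, focal=525.0):
    """Project a row-major grid of per-beam distances into (x, y, z) points.

    Beams with no return (depth <= 0) are skipped, mirroring how the real
    sensor flags invalid readings. focal and the principal point (cx, cy)
    are assumed, uncalibrated values used only for illustration.
    """
    cx, cy = width / 2.0, height / 2.0   # assumed principal point
    points = []
    for v in range(height):
        for u in range(width):
            z = depth[v * width + u]
            if z <= 0:                   # invalid / out-of-range beam
                continue
            x = (u - cx) * z / focal     # pinhole back-projection
            y = (v - cy) * z / focal
            points.append((x, y, z))
    return points

# Tiny 2x2 "frame": three valid beams, one dropout
cloud = depth_to_point_cloud([100, 0, 150, 200], width=2, height=2)
print(len(cloud))  # → 3
```

Running this over each 640x480 frame at 30 Hz is what yields the live 3D image students interact with.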
The unique visual appeal, novelty of interaction, and relatively easy-to-understand theory of operation make the Kinect an attractive platform for recruitment and outreach. Using the Kinect, a recruiter is able to quickly and effectively demonstrate a range of concepts involving hardware, software, and the design process on a platform that students are familiar with and find appealing. In a short window of time they are able to show examples and explain the fundamental principles of the system while providing tangible, meaningful, and enjoyable interactivity with the device itself. This level of approachability and familiarity is rare among highly technical fields, and provides an excellent catalyst for developing interest in Electrical and Computer Engineering education.
Efficient performance of complex knowledge work is of crucial importance to saving resources in the global economy and to long-term sustainability. A lot remains to be leveraged in engineering computer-based systems for assisting humans via cognitive and performance aids. The performance of knowledge-intensive tasks (simply, knowledge work) involves complex and dynamic interactions between human cognition and multiple sources of information. To achieve efficient healthcare for patients, a knowledge work support system (KwSS) focused on the biomedical domain needs complete access to domain information in order to offer correct and precise data to a knowledge worker. The collection of these data and their interrelationships can be automated by gleaning the necessary knowledge from Linked Open Data (LOD) sets available on the Internet. Because LOD sets are interlinked, populating a KwSS's knowledge base with their informational content allows the system to store the relationships among various biomedical concepts, thereby making it a more active consumer of knowledge and improving its ability to aid in any given setting. This paper explores the utility and completeness of LOD sets for a KwSS focused on the biomedical domain. In particular, two types of LOD sets are examined: domain-specific (e.g. Dailymed, DrugBank) and general-context (e.g. DBpedia, WordNet). More specifically, this paper investigates the structure of the available data, the extent to which such interlinked data can provide the knowledge content necessary for fulfilling tasks and activities performed in the biomedical domain (e.g. the patient-doctor setting), and how an individual can potentially access these data.
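Access to such LOD sets is typically through SPARQL. As a hedged sketch, the snippet below only constructs a query URL against DBpedia's public endpoint; it does not send it, and the `dbo:abstract` property is an assumption about DBpedia's schema that should be verified against the live data set before use.

```python
from urllib.parse import urlencode

DBPEDIA_ENDPOINT = "https://dbpedia.org/sparql"  # DBpedia's public SPARQL endpoint

def build_abstract_query_url(resource, limit=1):
    """Build (but do not send) a SPARQL request URL asking for the English
    abstract of a DBpedia resource, e.g. a drug a KwSS needs facts about."""
    query = f"""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {{
        <http://dbpedia.org/resource/{resource}> dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
    }} LIMIT {limit}
    """
    return DBPEDIA_ENDPOINT + "?" + urlencode({"query": query, "format": "json"})

# Hypothetical lookup a biomedical KwSS might perform
url = build_abstract_query_url("Aspirin")
print(url.startswith("https://dbpedia.org/sparql?"))  # → True
```

Because domain-specific sets such as DrugBank are interlinked with DBpedia, the same pattern extends to following links between data sets.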
Measures of text similarity have been used for a long time in applications in natural language processing and related areas such as text mining, Web page retrieval, and dialogue systems. Existing methods for computing sentence similarity have been adopted from approaches used for long text documents. These methods process sentences in a very high-dimensional space and are consequently inefficient, require human input, and are not adaptable to some application domains. This paper presents a method for measuring the semantic similarity of texts using corpus-based and knowledge-based measures of similarity. The semantic similarity of two sentences is calculated using information from a structured lexical database and from corpus statistics. The use of a lexical database enables our method to model human common-sense knowledge, and the incorporation of corpus statistics allows our method to be adaptable to different domains. The proposed method can be used in a variety of applications that involve text knowledge representation and discovery. Experiments on two sets of selected sentence pairs demonstrate that the proposed method provides a similarity measure that shows a significant correlation to human intuition.
Daryl Pregibon, a research scientist at Google Inc., states: "Data Mining is a mixture of statistics, artificial intelligence and database research." In other words, the purpose of this process is the automatic discovery of knowledge hidden in data using various computational techniques. The purpose of this work is to analyze the impact of GRID technology on storing and processing large amounts of information and knowledge. Using the computational power of computers and the most effective means of working with data, information exploitation is no longer a difficulty. The work shows a strong expansion of the use of GRID technologies in various fields, as a consequence of the development of our society and, in particular, of the scientific and technical world, which requires technologies that allow all parties to use resources in a well-controlled and well-organized way. Therefore, we can use GRID technologies for Data Mining processing. To see what the data "mining" process consists of, we must go through the following steps: construction and validation of the model, and application of the model to new data. The GRID-Data Mining connection can be successfully used to monitor environmental factors in the environmental protection field, to monitor behavior over time in the civil engineering field, to determine diagnoses in the medical field, and in telecommunications. To be able to develop "mining" applications on distributed data within a GRID network, the infrastructure to be used is the Knowledge GRID. This high-level infrastructure has an architecture dedicated to data "mining" operations, with specialized services for the discovery of resources stored in distributed repositories and for information services management.
In this concept, the achievement of data storage and processing is one of the most effective ways to obtain results with high accuracy, according to initial requirements, using automated knowledge discovery principles over the entire resource of knowledge existing in different systems. We can say that the main benefit obtained by using the Knowledge GRID architecture is a major improvement in the execution speed of the "mining" process.
Knowledge capture is an important key in a business world where huge quantities of data are available via the Internet. Knowledge, as usable information, is a necessary element in the success of any organization. Online information in the form of academic papers related to algorithms and tools for Thai word segmentation has recently grown and is distributed across various web sites; however, it has not been organized in a systematic way. Thus, this study proposes a knowledge capture method to support knowledge management activities. To meet the objectives of the study, knowledge engineering techniques play a very important role in the knowledge capture process in various ways, such as building knowledge models, simplifying access to the information they contain, and providing better ways to represent knowledge explicitly. In this study, several knowledge engineering methods were compared to select a suitable method for solving the problem of knowledge capture from academic papers: SPEDE, MOKA and CommonKADS. The CommonKADS methodology was selected because it provides sufficient tools, such as a model suite and templates for different knowledge-intensive tasks. However, creating and representing knowledge models creates difficulties for the knowledge engineer because of the ambiguity and unstructured nature of the knowledge sources. Therefore, the objective of this paper is to propose a methodology for capturing knowledge from academic papers using the knowledge engineering approach. Academic papers whose content relates to algorithms and tools for Thai word segmentation are used as a case study to demonstrate the proposed methodology.
The convergence of electronics and communication technologies, together with the integration of voice, data and images, has made it possible for information technology to play a major role in human resource re-engineering in knowledge-networked environments. The whole scenario synergizes into the concept of providing education or learning on demand, and leveraging information and expertise to improve organizational innovation, responsiveness, productivity and competency. Human resource re-engineering assumes greater significance in the new millennium, with knowledge management providing a catalytic tool for involving, acquiring, creating, packaging, distributing, applying and maintaining knowledge databases. This paper deals with certain components of knowledge management, emphasizing knowledge networking concepts realized by working out strategic partnerships/alliances with leading organizations, which will enable new paradigms for assessing and measuring a country's economic empowerment in a totally networked global economy. The salient features described in this paper also cover knowledge management, knowledge categories, knowledge types and strategic business objectives, with knowledge management and collaboration/alliances/partnerships playing a leading role in the years to come in making the world economy predominantly network- and knowledge-dependent.
Knowledge management plays an important role in personalized service in e-commerce. However, the incompleteness of knowledge has degraded knowledge collaboration in the e-commerce context. This paper addresses issues of knowledge acquisition for serving users with needed information and products through agent technology and fuzzy ontology. A seller agent and a buyer agent were constructed, which handle knowledge acquisition and utilization. Finally, a framework of knowledge management was implemented to carry out knowledge application, and the validity of the framework was verified with related empirical data.
Nowadays, most manufacturing industries are confronted by rapidly changing market requirements and tremendous pressure from technology developments. As product development requires shorter lead times and faster response to market demand, close communication and collaboration are required. In the traditional way of engineering, each system is commonly operated in an isolated environment, and efficient communication between systems is difficult to achieve. This causes wasted investment in hardware, software, and human resources, and also causes inefficiency in product development and production. PLM (Product Lifecycle Management) and new IT (Information Technology) are the most powerful approaches to solving these problems. PLM is an innovative manufacturing paradigm which leverages e-business technologies to allow a company's product content to be developed and integrated with all company business processes through the extended enterprise. For the successful adoption of PLM, the management, creation, and collaboration of all product- and manufacturing-related information is essential, and the exchange of product information is a core part of the PLM business process. This paper proposes a methodology and prototype system for product information exchange and integrated management in an automotive supply network. The approach is architected using ooCBD (object-oriented component-based development), which supports analysis and refinement from the use case model to implementation testing. We then define a BOM (Bill of Materials) neutral file in XML (eXtensible Markup Language) format based on the PLM Services standard, an articulate PLM data model schema for exchanging product information. Finally, we implement software modules to support data exchange between the commercial PDM (Product Data Management) systems of diverse suppliers and a CAD system. In this research, the target CAD/PDM systems are Teamcenter Engineering, SmarTeam and CATIA.
Using this product information exchange and integrated management in an engineering collaboration portal, together with the suggested neutral file for BOM, many engineers in the supply network can share their information and knowledge in a fast, integrated and convenient way.
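To make the idea of a BOM neutral file concrete, the sketch below builds a tiny XML bill of material, serializes it, and parses it back using only the Python standard library. The element and attribute names (`BOM`, `Item`, `id`, `quantity`) are illustrative assumptions for this sketch, not the actual PLM Services schema referenced in the abstract.

```python
import xml.etree.ElementTree as ET

def build_bom():
    # Hypothetical product structure; part ids and names are invented examples.
    root = ET.Element("BOM", {"product": "DoorAssembly"})
    for part_id, name, qty in [
        ("P-001", "DoorPanel", 1),
        ("P-002", "Hinge", 2),
        ("P-003", "BoltM6", 8),
    ]:
        ET.SubElement(root, "Item",
                      {"id": part_id, "name": name, "quantity": str(qty)})
    return root

def serialize(root):
    # Produce the neutral-file text that would be exchanged between PDM systems.
    return ET.tostring(root, encoding="unicode")

def parse_items(xml_text):
    # A receiving system recovers the structured item list from the XML text.
    root = ET.fromstring(xml_text)
    return [(i.get("id"), i.get("name"), int(i.get("quantity")))
            for i in root.findall("Item")]

xml_text = serialize(build_bom())
items = parse_items(xml_text)
print(items)
```

A real exchange module would of course validate against the standard's schema; the round trip here only illustrates the neutral-file principle of decoupling sender and receiver through an agreed format.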
This paper presents a method to evaluate the air flow effectiveness of both traditional raised-floor designs and non-raised-floor air-conditioning designs for data centers. Metrics are developed that easily permit owners, engineers and operators to measure and quantify the performance of their data center air distribution systems, or of changes they make to their cooling systems to improve air management and hence cooling system efficiency. The metrics incorporate and integrate the major factors that decrease the effectiveness of computer room air cooling. These metrics, which are covered in the paper, include: negative pressure flow rate (air induced into the floor void), bypass flow rate (from the floor void directly back to the air conditioners without cooling servers), recirculation flow rate (from server outlet back into server inlet) and the balance of CRAC and server design flow rates. Examples of the application of these metrics are also presented. Additionally, benchmarking data on bypass and recirculation collected from over 60 data centers during energy audits are presented. The benchmarking data clearly identify potential energy-saving opportunities when compared to the ideal (no bypass and no recirculation). Consequently, one can use the benchmarking data to compare a given data center to others and to measure progress in reducing recirculation and bypass levels as energy conservation measures and best design practices are implemented in the data center. The methodology presented in this paper offers the advantage of establishing a quick understanding of the air management in the data center in a fairly short amount of time using internal resources, once the presented guidelines and lessons learned are followed.
Performing CFD, which may be required in some cases, requires specialized engineers with sufficient knowledge of data center air flow paths, rack and server types, cooling equipment and power distribution units in order to build reliable air flow models using any of the commercially available CFD packages. These requirements render CFD simulation an expensive service. Hence, CFD simulation should be used when air management analysis falls short; examples of such cases include planning for future IT growth using specific hardware in a specific location.
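A rough sense of how such metrics behave can be given with a simplified mass-balance sketch: when the CRAC units supply more air than the servers draw, the surplus bypasses the IT equipment; when they supply less, the deficit is made up by recirculated hot air at the server inlets. The formulas below are this simplification, stated as an assumption, and not the paper's exact metric definitions; the flow figures are invented.

```python
def air_management_metrics(crac_flow_cfm, server_flow_cfm):
    # Simplified mass balance (an assumption for illustration):
    # surplus CRAC flow is counted as bypass, deficit as recirculation.
    bypass = max(0.0, crac_flow_cfm - server_flow_cfm)
    recirculation = max(0.0, server_flow_cfm - crac_flow_cfm)
    return {
        # fraction of supplied air that returns without cooling servers
        "bypass_fraction": bypass / crac_flow_cfm,
        # fraction of server intake drawn from server exhaust
        "recirculation_fraction": recirculation / server_flow_cfm,
        # balance of CRAC vs server design flow rates (1.0 is ideal)
        "flow_balance": crac_flow_cfm / server_flow_cfm,
    }

# Example: CRACs oversupply by a third, so a quarter of supplied air bypasses.
m = air_management_metrics(crac_flow_cfm=120_000, server_flow_cfm=90_000)
print(m)
```

Under this simplification a site cannot show both bypass and recirculation at once, whereas real data centers commonly exhibit both; that is precisely the extra detail the paper's field-measured metrics capture.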
The most important element of industrial software development is the creation of a common vocabulary of terms for exchanging information between software and industrial engineers. Based on this cooperation, technical domain knowledge is converted into data structures, algorithms and rules. Currently, when people are used to receiving short and quick messages, the most efficient way of knowledge extraction is to work on examples or mockups to facilitate better understanding of the problem. Shorter rounds in the presentation of mockups allow continuous work on live object models rather than specifications, which makes experts more open to sharing their knowledge and provides quicker and more reliable feedback on the data structure and the completeness of the model. The latest research and progress in the area of Model Driven Architecture (MDA) have resulted in advanced tools for the creation of models, automatic source code generation, and whole frameworks for creating application skeletons based on these models. In this paper, a collaborative process which uses the MDA approach (model, tools and frameworks) for extracting knowledge from domain experts is presented. During the presented process, cooperation between a software engineer and a domain expert via phone calls and one live workshop resulted in a complete model of a machine and drive, including specific machine features and diagnostic processes. Finally, a working diagnostics application was verified by the domain expert, proving that MDA produced the expected results. The diagnostics application was verified on real data collected on a winding machine for more than one month; the collected diagnostics data included more than 150 signals and 20 GB of raw analog data to dig into before producing condensed diagnostics results. In addition to the process itself, the article presents identified risks, benefits of applying the MDA approach and lessons learned from applying this new innovative process.
As further work, the possibilities of extending existing models, including extending them dynamically, should be studied. In previous works we focused on an ontology-based approach, which does not meet all expectations when it comes to application in a real-world environment. As a simpler and more mature technology, MDA was shown to be more productive and easier to adapt for building industrial applications.
Recently, geospatial science and engineering has been experiencing a paradigm shift from having all data and computing resources owned locally to having them shared over the Web. This has brought both benefits and challenges to current geospatial science and engineering education. GeoBrain, funded by NASA, has great insight into this new trend and establishes a unique Web-based data, information and knowledge system by adopting and developing the latest Web service and knowledge management technologies. This system has cutting-edge capabilities in geospatial data discovery, visualization, retrieval and analysis, and provides unprecedented online data, services and other computing resources. This paper addresses the GeoBrain online resources for supporting college-level, data-intensive geospatial science and engineering education by presenting GeoBrain Data Download with interoperable, personalized, on-demand data access and services (IPODAS) and the integrated GeoBrain Online Analysis System (GeOnAS).
The Naval Air Systems Command (NAVAIR) produces and supports highly complex aircraft weapons systems which provide the advanced capabilities required to defend U.S. freedoms. Supporting complex systems such as the MV-22/CV-22 aircraft requires being able to troubleshoot and mitigate complex failure modes in dynamic operational environments. Since an aircraft comprises multiple systems designed by specialty sub-vendors and subsequently brought together by an aircraft integrator, diagnostics at the aircraft level are usually “good enough” but not capable of 100% fault isolation to a single component. Today's system components must be highly integrated and are required to communicate via high-speed data-bus conduits which require precise synchronization between systems. Failure modes of aircraft are identified via design, analysis and test prior to fielding of the weapon system. However, not all failure modes are typically known at the time of a system's Initial Operational Capability; rather, they are found in the field by maintainers and pilots and then mitigated with aircraft engineering changes or system replacements. Also, the requirement for increased capabilities can drive the need for new systems to be integrated into an aircraft system that may not have been considered in the initial design and support concept. There is a wealth of maintenance action detail collected by pilots, maintenance officers (MO) and engineers that can and should be used to identify failure mode trends that come to light during the operational phase of an aircraft. New troubleshooting techniques can be developed to address underlying failure modes, increasing the efficiency of future maintenance actions and thus reducing the logistics trail required to support the aircraft.
The elements available for analysis are maintenance results input by the MO/pilot (including free-form comments regarding problems and resulting actions), Built-In-Test (BIT) fault codes recorded during a flight, and off-aircraft test equipment (such as the Consolidated Automated Support System, CASS) historical test results. The Integrated Support Environment (ISE) is collecting the data required to perform analysis of underlying maintenance trends that can be identified using specialized software data mining tools, such as text mining of the corrective action and maintainer comment data fields from maintenance results. The findings or knowledge extracted from text mining can be correlated back to fault codes recorded during flight and historical maintenance results to help mitigate issues with broken troubleshooting procedures causing headaches to our Sailors and Marines in the field. By tagging key phrases from the maintainer's/pilot's remarks, insight can be gleaned into how the aircraft fails in demanding environments. The premise of this research is to first choose an apparently high-failure avionics system on the V-22 aircraft that is experiencing a high removal rate from the aircraft but is subsequently found to be fully operational when tested on CASS. The results of this analysis should present potential root causes for “Cannot Duplicate” situations by recommending an augmentation of diagnostics at the aircraft level, to avoid removing and replacing a system that has not failed even though it has been reported bad by the aircraft diagnostics. This research will utilize the Net-Centric Diagnostics Framework (NCDF) to retrieve past Smart Test Program Set (TPS) results/BIT sequence strings as a variable for identifying trends in V-22 aircraft maintenance actions.
The results of the research will be socialized with the V-22 avionics Fleet Support Team and the Comprehensive Automated Maintenance Environment Optimized (CAMEO) for validation of findings before any troubleshooting changes are recommended. If required, the Integrated Diagnostics and Automated Test Systems group will perform an engineering analysis of the problem and suggest
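The core mining step described above, tagging key phrases in free-form maintainer comments and correlating them with recorded fault codes, can be sketched as follows. The records, phrases and fault codes are invented illustrations, not NAVAIR data, and a real system would use a proper text-mining pipeline rather than substring matching.

```python
from collections import Counter

# Invented maintenance records standing in for MO/pilot free-form comments
# paired with BIT fault codes recorded in flight.
RECORDS = [
    {"fault_code": "BIT-104",
     "comment": "Removed radio, retest on CASS passed, cannot duplicate"},
    {"fault_code": "BIT-104",
     "comment": "Intermittent fault in flight, cannot duplicate on bench"},
    {"fault_code": "BIT-221",
     "comment": "Replaced connector, corrosion found"},
]

# Key phrases a maintenance analyst might tag; also illustrative.
KEY_PHRASES = ["cannot duplicate", "intermittent", "corrosion"]

def tag(comment):
    # Naive tagging: which key phrases appear in the comment text.
    return [p for p in KEY_PHRASES if p in comment.lower()]

def cooccurrence(records):
    # Count how often each (fault code, phrase) pair occurs together,
    # surfacing e.g. fault codes dominated by "cannot duplicate" events.
    counts = Counter()
    for r in records:
        for phrase in tag(r["comment"]):
            counts[(r["fault_code"], phrase)] += 1
    return counts

counts = cooccurrence(RECORDS)
print(counts.most_common(3))
```

In this toy data, BIT-104 co-occurs twice with "cannot duplicate", the kind of trend that would flag a candidate for augmented aircraft-level diagnostics.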
Product and service innovation has become an increasingly complex process, requiring knowledge from a wide range of sources both external and internal to the organization. The purpose of this research is to identify the knowledge that software/service engineers, in comparison to hardware engineers, need to retrieve from past projects and for future projects, and to identify enablers of and barriers to knowledge management. A quantitative study of Japanese engineers was conducted, and the accumulated data was analyzed using text mining and regression analysis. Results showed that software/service engineers value knowledge related to specifications and development, while design/production/maintenance engineers attach more importance to knowledge related to technology, design, cases, customers and competitors. There was also a difference in the amount of time spent on knowledge management activities, which was significantly lower for software/service engineers. Although intention emerged as the most important enabler of knowledge management overall, a combination of individual and organizational factors was found to hinder knowledge management activities. These findings suggest that software/service engineers and design/production/maintenance engineers have different requirements and perceptions with regard to knowledge management, and that firms need to motivate engineers by providing distinct organizational contexts specific to their engineering needs.
New information and communication technologies may be useful for providing more in-depth knowledge to students in many ways, whether through online multimedia educational material or through online debates with colleagues, teachers and other area professionals in a synchronous or asynchronous manner. This paper focuses on participation in online discussion in e-learning courses for promoting learning. Although this is an important theoretical aspect, an analysis of the literature reveals there are few studies evaluating the personal and social aspects of online course users in a quantitative manner. This paper aims to introduce a method for diagnosing inclusion, digital proficiency and other personal aspects of the student through a case study comparing Information Systems, Public Relations and Engineering students at a public university in Brazil. Statistical analysis and analysis of variance (ANOVA) were used as the methodology for data analysis in order to understand existing relations between the components of the proposed method. The survey methodology was also used, in its online format, as a research instrument. The method is based on online questionnaires that diagnose the digital proficiency, time management, level of extroversion and social skills of the students. According to the sample studied, there is no strong correlation between digital proficiency and individual characteristics tied to the use of time, level of extroversion and social skills of students. The differences in course grades for some components are partly due to the subject “Introduction to Economics” being offered to freshmen in Public Relations, whereas the subject “Economics in Engineering” is offered in the final semesters of the Engineering and Information Systems courses. Therefore, the difference could be more tied to the respondent's age than to the course.
Information Systems students were observed to be older, with access to computers and the Internet at the workplace, compared to the other students, who access the Internet more often from home. This paper presents a pilot study aimed at conducting a diagnosis that permits proposing actions for information and communication technology to contribute towards student education. Three levels of digital inclusion are described as a scale to measure whether information technology increases personal performance and professional knowledge and skills. This study may be useful for other readers interested in themes related to education in engineering.
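The one-way ANOVA used in the study above can be illustrated with a small self-contained computation of the F-statistic. The grade samples for the three student groups are invented for illustration; only the F-statistic formula itself, F = (SSB/(k-1)) / (SSW/(N-k)), is standard.

```python
def one_way_anova_f(groups):
    # One-way ANOVA F-statistic: ratio of between-group variance
    # to within-group variance.
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # between-group sum of squares (each group mean vs the grand mean)
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares (each value vs its own group mean)
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n_total - k))

# Hypothetical course grades for three course groups (not the study's data).
info_sys = [7.0, 8.0, 6.5, 7.5]
public_rel = [6.0, 5.5, 6.5, 6.0]
engineering = [7.5, 8.5, 8.0, 7.0]

f_stat = one_way_anova_f([info_sys, public_rel, engineering])
print(round(f_stat, 2))  # 9.75 for this sample
```

A large F relative to the F-distribution's critical value for (k-1, N-k) degrees of freedom indicates that at least one group mean differs, which is the kind of between-course comparison the paper reports.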
Online engineering education tools present students with flexible access to local and distance learning resources and offer an opportunity to maintain student engagement via the use of dynamic interfaces. This paper addresses lessons learned from the creation and use of online homework generation modules in an electrical engineering signals and systems course. The nine modules address complex number calculations, complex conversions, signal graphing, zero-input response, unit impulse response, Fourier series, and fast Fourier transforms. The primary goal was to create an innovative and engaging set of online learning experiences that would allow faculty to assess the transfer of mathematical knowledge from calculus and differential equations courses to subsequent electrical engineering courses. These modules offer student-specific problem generation and automatic grading, where the latter accelerates the feedback cycle and provides tool scalability to large numbers of students. The tools are easily upgradable and offer the opportunity to track, through a database, elements of the student learning process that often go unrecorded but yield a rich data set for correlating performance on related subjects in current, previous, or subsequent semesters. The modules have been employed for nine semesters to date, and student survey data from these experiences supplement data stored in the database files and data recorded from written examinations. Student reactions to these tools have been generally positive, where the ease of answer entry plays a large role in the experience. Quantitative correlations between module scores, grades on written examinations, and performance in previous mathematics courses have demonstrated variable clarity, but qualitative assessments of the technology-facilitated environment point to a clear increase in student learning and engagement.
Instructor benefits are apparent with regard to grading time saved, grading consistency, confidence in student accountability for work submitted, and information regarding when/where students work that is difficult to obtain any other way.
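Student-specific problem generation with automatic grading, the mechanism at the heart of the modules above, can be sketched by seeding a random generator with the student identifier so each student gets a reproducible problem, then grading numerically within a tolerance. The problem type (complex-to-polar conversion) and all names here are illustrative assumptions; the paper does not publish its module internals.

```python
import cmath
import random

def generate_problem(student_id, seed=0):
    # Seeding with a string derived from the student id makes the
    # problem reproducible per student but different across students.
    rng = random.Random(f"{student_id}:{seed}")
    z = complex(rng.randint(1, 9), rng.randint(1, 9))
    return {"prompt": f"Convert {z} to polar form (r, theta)",
            "answer": cmath.polar(z)}

def grade(problem, submitted, tol=1e-3):
    # Automatic grading: accept answers within a numeric tolerance,
    # which is what lets feedback scale to large enrollments.
    r, theta = submitted
    r_true, theta_true = problem["answer"]
    return abs(r - r_true) < tol and abs(theta - theta_true) < tol

p = generate_problem("student42")
assert p == generate_problem("student42")  # same student, same problem
assert grade(p, p["answer"])               # correct answer passes
assert not grade(p, (0.0, 0.0))            # wrong answer fails
```

Logging each `grade` call with a timestamp into a database would provide exactly the when/where learning-process data the instructors describe.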
Summary form only given. In light of the ongoing activities related to the “smart grid”, the perception that today's utility workforce is woefully inadequate has gained considerable traction. As the discussions around the smart grid have evolved, the industry, supported by the US DOE, has reached a consensus and formally defined the requirements for tomorrow's smart grid. Yet it is not clear what exactly is expected of tomorrow's workforce. Should there be an emphasis on power system engineering, communication technologies, green generation technologies, basic sciences, hardware, software, engineering management, or plain-vanilla “soft” skills? Present trends in the industry, including the aging asset base, the aging workforce and the integration of renewable and sustainable resources, all against the background of the smart grid, make the discussions around what is expected of the future utility workforce even more complicated. University faculty and industry personnel are trying to figure out the answers to the questions above, and common themes are slowly emerging even though there is no definitive consensus on the expected needs of the future utility workforce. Even so, the concepts of holistic education and critical thinking seem to be recurring themes at all levels. Consensus about the importance of Information and Communication Technology (ICT) is another point of convergence. Implementation of ICT in various business processes and the effective breaking down of silos is an underlying theme in leveraging critical data, out of which meaningful information and knowledge may be extracted to effect local change with global consequences. This paper provides a critical view of the gaps that need to be bridged to transition today's utility workforce in the wake of tomorrow's anticipated smart grid needs.
Advertised utility positions, such as Manager, Smart Grid and Technology Integration Strategy, and Systems Engineer, Power Systems & Smart Grid, will be examined in detail as a means to develop a better understanding of what utilities are doing, and how, to bridge the perceived gaps in today's and tomorrow's utility workforce. Efforts at leading universities, technical schools and government agencies (such as CEWD and NSF) are described as well. Certificate programs and new course curricula addressing a more holistic approach are presented. Another important aspect of these efforts is the way in which these new courses are delivered, as technology has changed the way in which workers may be trained, or retrained, using the concepts of online and long-distance learning programs. These “learn at your own pace” courses are gaining popularity alongside more traditional time-bound courses delivered in a classroom setting, as more and more people are drawn towards a career in the “electric” energy business.
Summary form only given. Health services today face a number of interacting challenges. It is generally accepted that major, even radical, changes are required in health services to deliver better health outcomes at an affordable cost. IT is a change driver in both the horizontal and vertical directions. Horizontally, we see the emergence and extension of a generic IT infrastructure that will eventually allow anytime, anywhere access to information and knowledge and context-aware decision making. In the vertical direction, IT enables a more effective and efficient health service environment. Standards and guidelines for an interoperable electronic patient record (EPR) have been developed and are today endorsed by industry and governments. Several countries are investing in a nationwide "backbone" for health information and EPR exchange. The creation of the "backbone" allows separating data from processes. It is in the processes (services) that the biggest potential and challenge lie for improving the effectiveness and efficiency of health services. Another change driver is the expansion of (traditional) healthcare into a "health continuum", with the aim of engaging and empowering people to proactively manage their health and illness. This extends to ageing people, who can be assisted with tools and services to remain independent and integrated in society. A third driver is the integration of medicine, life sciences, and engineering and physical sciences. These drivers point towards an application landscape characterized by "3 P's": pervasive, personal and personalized. In the keynote, we define what these three P's contain and how they relate to each other.
The Master DKE delivers in-depth knowledge and competences in Data & Knowledge Engineering, one of the most promising career areas for ambitious computer scientists. Its subject area is "engineering" for data and for knowledge, aiming to turn passive data into exploitable knowledge: it focuses on the representation, management and understanding of data and knowledge assets. It encompasses technologies for the design and development of advanced databases, knowledge bases and expert systems; methods for the extraction of models and patterns from conventional data, texts and multimedia; and modelling instruments for the representation and updating of extracted knowledge. The Master DKE can be studied in German or English and is thus open to students mastering either of the two languages. DKE spans application areas ranging from business intelligence and market watches to life sciences, biotechnology and security. It builds upon advances in networked services, people and agent communication, decision support, information systems and management. The management and analysis of data and the maintenance and understanding of knowledge assets are of major importance in business venues, in governmental authorities and in non-profit organizations.
The Master DKE prepares graduates for careers as knowledge engineers in large institutions like banks, medical centers, joint ventures and holdings, but also in small and medium-size enterprises; as project managers for interdisciplinary projects that demand data-intensive solutions; as IT consultants specialized in the design and development of knowledge-intensive scenarios, with application areas including e-business, biotechnology and customer relationship management; and as researchers on information systems, intelligent systems and their many application areas. The Master DKE also provides the foundations for further studies.
The overclaim of privileges is widespread in emerging applications, including mobile applications and social network services, because the applications' users involved in policy administration have little knowledge of policy-based management. The overclaim can be leveraged by malicious applications and then lead to serious privacy leakages and financial loss. To resolve this issue, this paper proposes a novel policy administration mechanism, referred to as Collaborative Policy Administration (CPA for short), to simplify policy administration. In CPA, a policy administrator can refer to other similar policies to set up their own policies to protect privacy and other sensitive information. This paper formally defines CPA and proposes its enforcement framework. Furthermore, in order to obtain similar policies more effectively, which is the key step of CPA, a text-mining-based similarity measure method is presented. We evaluate CPA with data from Android applications and demonstrate that the text-mining-based similarity measure method is more effective in obtaining similar policies than the previous category-based method.
Design objects in CAD applications have versions and participate in the construction of other, more complex design objects. The author describes data model aspects of an experimental database system for CAD applications called Pegasus.
The model is based on previously published work on extensible and object-oriented database systems. The novel idea of Pegasus is the reconciliation of two subtyping (inheritance) mechanisms: the first, called refinement, is based on the usual semantics of schema copying; the second, called extension, is based on the inheritance semantics between prototypes and their extensions. The author uses these modeling elements to show how generic and version objects, as well as component occurrences of (generic or version) components, can be modeled.