Visualization is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of humanity. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes. Today, visualization has ever-expanding applications in science, education, engineering (e.g., product visualization), interactive multimedia, medicine, and beyond. A typical visualization application is the field of computer graphics. The invention of computer graphics may be the most important development in visualization since the invention of central perspective in the Renaissance. The development of animation has also helped advance visualization.
In the current trend, applications require quick and easy access to widely dispersed data; data are therefore stored in data centers to provide centralized access. Existing static resource allocation, even in combination with virtual machine techniques, does not suffice for applications of a dynamic nature, so an optimized resource allocation model is proposed. The proposed two-tiered system is designed to serve the data requirements of both static and dynamic applications, either locally or through a global resource allocation strategy, on demand. The local resource allocator achieves high performance through effective adjustment of CPU time slots and efficient memory management, and the global allocator achieves the same by managing the applications installed on each local scheduler. In this way the system minimizes resource wastage and ensures the performance of critical applications when resource utilization is high.
In a shared virtual computing environment, dynamic load changes as well as the different quality requirements of applications over their lifetime give rise to dynamic and varying capacity demands, which result in lower resource utilization and application quality under the existing static resource allocation. Furthermore, the total required capacity of all the hosted applications in current enterprise data centers, for example Google's, may surpass the capacity of the platform. In this paper, we argue that the existing technique of turning servers on or off with the help of virtual machine (VM) migration is not enough. Instead, finding an optimized dynamic resource allocation method that solves the problem of on-demand resource provisioning for VMs is the key to improving the efficiency of data centers. However, the existing dynamic resource allocation methods focus only on either local optimization within a server or central global optimization, limiting the efficiency of data centers. We propose a two-tiered on-demand resource allocation mechanism consisting of local and global resource allocation with feedback to provide on-demand capacity to the concurrent applications. We model on-demand resource allocation using optimization theory. Based on the proposed dynamic resource allocation mechanism and model, we propose a set of on-demand resource allocation algorithms. Our algorithms preferentially ensure the performance of critical applications, named by the data center manager, when resource competition arises, according to the time-varying capacity demands and quality of the applications. Using Rainbow, a Xen-based prototype we implemented, we evaluate the VM-based shared platform as well as the two-tiered on-demand resource allocation mechanism and algorithms.
The experimental results show that Rainbow without dynamic resource allocation (Rainbow-NDA) provides 26 to 324 percent improvements in application performance, as well as 26 percent higher average CPU utilization, than a traditional service computing framework in which applications use exclusive servers. The two-tiered on-demand resource allocation further improves performance by 9 to 16 percent for critical applications (75 percent of the maximum possible performance improvement), introducing performance degradations of up to 5 percent for other applications, with 1 to 5 percent improvements in resource utilization in comparison with Rainbow-NDA.
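The two-tiered idea described above can be illustrated with a minimal sketch. All names and policies here are invented for illustration and are not the paper's Rainbow implementation: the local tier serves critical applications first and shares leftover capacity proportionally, while the global tier merely flags overloaded hosts for re-placement.

```python
# Hypothetical sketch of a two-tiered on-demand allocator; the
# first-critical-then-proportional policy is an assumption, not
# the paper's actual algorithm.

def local_allocate(capacity, demands, critical):
    """Divide one host's CPU capacity among its applications.

    Critical applications are served first; remaining capacity is
    shared among the rest in proportion to their demands.
    """
    alloc = {}
    remaining = capacity
    # Tier 1: satisfy critical applications first.
    for app in critical:
        alloc[app] = min(demands[app], remaining)
        remaining -= alloc[app]
    # Tier 2: share what is left proportionally among the others.
    others = [a for a in demands if a not in critical]
    total = sum(demands[a] for a in others)
    for app in others:
        alloc[app] = remaining * demands[app] / total if total else 0.0
    return alloc

def global_allocate(hosts):
    """Flag hosts whose total demand exceeds capacity, so a global
    tier could migrate or re-place applications.

    hosts: {name: (capacity, demands_dict)}
    """
    return [h for h, (cap, demands) in hosts.items()
            if sum(demands.values()) > cap]
```

For example, with capacity 100, demands `{'web': 50, 'db': 40, 'batch': 30}`, and `db` marked critical, `db` receives its full 40 while the remaining 60 units are split 37.5/22.5 between `web` and `batch`.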
IT organizations take advantage of virtualization by consolidating their server infrastructure: power and management costs are reduced by consolidating maintenance into a single area, providing simpler and more affordable solutions for achieving high availability, load balancing, and disaster recovery. With virtualization, the cost benefits are roughly proportional to the consolidation ratios achievable for the business. As the number of virtual machines on a server increases, the volume and complexity of I/O traffic obviously increase as well. This creates a most challenging environment for IT organizations. The major issues faced by companies are data access and networking latencies that negatively impact application performance, and I/O bottlenecks that limit the number of VMs that can be hosted per physical server. Existing systems normally focus on providing an inside view of the virtual machines by considering factors such as normalized throughput, server throughput, aggregated throughput, reply time, transfer time, CPU time per execution, net I/O per second, CPU utilization, and executions per second. The proposed system incorporates all of the above information. In addition, we focus on basic aspects such as possible errors, user access, network-related data transmission, server reads/writes, services, and all other I/O-related information in an optimized way using the Apriori algorithm.
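To make the Apriori step concrete, here is a minimal, self-contained sketch of the classic Apriori frequent-itemset algorithm applied to invented I/O event logs (the event names and support threshold are illustrative assumptions, not the proposed system's actual data):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return all itemsets appearing in at least min_support transactions."""
    transactions = [frozenset(t) for t in transactions]
    items = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    level = {s for s in items
             if sum(s <= t for t in transactions) >= min_support}
    k = 1
    while level:
        for s in level:
            frequent[s] = sum(s <= t for t in transactions)
        # Candidate generation: join k-itemsets into (k+1)-itemsets whose
        # every k-subset is frequent (the Apriori pruning step).
        candidates = {a | b for a in level for b in level if len(a | b) == k + 1}
        level = {c for c in candidates
                 if all(frozenset(s) in frequent for s in combinations(c, k))
                 and sum(c <= t for t in transactions) >= min_support}
        k += 1
    return frequent

# Invented per-interval I/O event logs for one server.
logs = [{'read', 'net_io', 'error'},
        {'read', 'net_io'},
        {'write', 'net_io'},
        {'read', 'net_io', 'error'}]
freq = apriori(logs, min_support=3)
```

With this toy log, the pair `{read, net_io}` is reported as frequent (support 3), which is the kind of co-occurrence pattern the proposed system would surface from real I/O traces.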
Server consolidation and application consolidation through virtualization are key performance optimizations in the cloud-based service delivery industry. In this paper, we argue that it is important for both cloud consumers and cloud providers to understand the various factors that may have significant impact on the performance of applications running in a virtualized cloud. This paper presents an extensive performance study of network I/O workloads in a virtualized cloud environment. We first show that current implementation of virtual machine monitor (VMM) does not provide sufficient performance isolation to guarantee the effectiveness of resource sharing across multiple virtual machine instances (VMs) running on a single physical host machine, especially when applications running on neighboring VMs are competing for computing and communication resources. Then we study a set of representative workloads in cloud-based data centers, which compete for either CPU or network I/O resources, and present the detailed analysis on different factors that can impact the throughput performance and resource sharing effectiveness. For example, we analyze the cost and the benefit of running idle VM instances on a physical host where some applications are hosted concurrently. We also present an in-depth discussion on the performance impact of colocating applications that compete for either CPU or network I/O resources. Finally, we analyze the impact of different CPU resource scheduling strategies and different workload rates on the performance of applications running on different VMs hosted by the same physical machine.
The technology of visualization is widely applied in modeling and simulation (M&S). First, the principle of simulation visualization is introduced. Then the aspects of visualization technology applied in M&S are discussed, namely modeling visualization, simulation experiment visualization, simulation result analysis visualization, and simulation management visualization; these applications cover all M&S activities. Modeling visualization includes visualization of models and of the modeling process; model visualization is elaborated for system models, environment models, and the relationships and interactions between models. Simulation experiment visualization consists of experiment initial-state visualization and experiment procedure visualization. Simulation result analysis visualization is presented through visualization technology applied to making results visible and to result analysis and evaluation. Visual management of simulation data and models, experiment design, state stakeout, human-computer interaction, and manual intervention illustrate management visualization in simulation.
A great corpus of studies reports empirical evidence of how information visualization supports comprehension and analysis of data. The benefits of visualization for synchronous group knowledge work, however, have not been addressed extensively. Anecdotal evidence and use cases illustrate the benefits of synchronous collaborative information visualization, but very few empirical studies have rigorously examined the impact of visualization on group knowledge work. We have consequently designed and conducted an experiment in which we have analyzed the impact of visualization on knowledge sharing in situated work groups. Our experimental study consists of evaluating the performance of 131 subjects (all experienced managers) in groups of 5 (for a total of 26 groups), working together on a real-life knowledge sharing task. We compare (1) the control condition (no visualization provided) with two visualization supports: (2) optimal and (3) suboptimal visualization (based on a previous survey). The facilitator of each group was asked to populate the provided interactive visual template with insights from the group, and to organize the contributions according to the group consensus. We have evaluated the results through both objective and subjective measures. Our statistical analysis clearly shows that interactive visualization has a statistically significant, objective and positive impact on the outcomes of knowledge sharing, but that the subjects seem not to be aware of this. In particular, groups supported by visualization achieved higher productivity, higher quality of outcome and greater knowledge gains. No statistically significant results could be found between an optimal and a suboptimal visualization, though (as classified by the pre-experiment survey).
Subjects also did not seem to be aware of the benefits that the visualizations provided as no difference between the visualization and the control conditions was found for the self-reported measures of satisfaction and participation. An implication of our study for information visualization applications is to extend them by using real-time group annotation functionalities that aid in the group sense making process of the represented data.
Conveying a narrative with visualizations often requires choosing an order in which to present visualizations. While evidence exists that narrative sequencing in traditional stories can affect comprehension and memory, little is known about how sequencing choices affect narrative visualization. We consider the forms and reactions to sequencing in narrative visualization presentations to provide a deeper understanding with a focus on linear, 'slideshow-style' presentations. We conduct a qualitative analysis of 42 professional narrative visualizations to gain empirical knowledge on the forms that structure and sequence take. Based on the results of this study we propose a graph-driven approach for automatically identifying effective sequences in a set of visualizations to be presented linearly. Our approach identifies possible transitions in a visualization set and prioritizes local (visualization-to-visualization) transitions based on an objective function that minimizes the cost of transitions from the audience perspective. We conduct two studies to validate this function. We also expand the approach with additional knowledge of user preferences for different types of local transitions and the effects of global sequencing strategies on memory, preference, and comprehension. Our results include a relative ranking of types of visualization transitions by the audience perspective and support for memory and subjective rating benefits of visualization sequences that use parallelism as a structural device. We discuss how these insights can guide the design of narrative visualization and systems that support optimization of visualization sequence.
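The graph-driven sequencing idea can be sketched with a small greedy heuristic: given pairwise transition costs between visualizations (in practice derived from shared data fields, granularity changes, and so on), repeatedly take the cheapest local transition. This is an illustrative simplification, not the authors' actual objective function or optimizer:

```python
# Hypothetical greedy sketch of graph-driven sequencing: the cost
# matrix and greedy policy are illustrative assumptions.

def best_sequence(costs, start):
    """Greedily order visualizations so each local
    visualization-to-visualization transition is cheap.

    costs: square matrix, costs[i][j] = cost of transitioning i -> j.
    """
    n = len(costs)
    order, seen = [start], {start}
    while len(order) < n:
        cur = order[-1]
        # Pick the unvisited visualization with the cheapest transition.
        nxt = min((j for j in range(n) if j not in seen),
                  key=lambda j: costs[cur][j])
        order.append(nxt)
        seen.add(nxt)
    return order
```

For instance, with three visualizations where 0→1 and 1→2 are cheap but 0→2 is expensive, the greedy order starting at 0 is [0, 1, 2], avoiding the costly jump.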
Focus+context visualization integrates a visually accentuated representation of selected data items in focus (more details, more opacity, etc.) with a visually deemphasized representation of the rest of the data, i.e., the context. The role of context visualization is to provide an overview of the data for improved user orientation and navigation. A good overview comprises the representation of both outliers and trends. Up to now, however, context visualization has not treated outliers sufficiently. In this paper we present a new approach to focus+context visualization in parallel coordinates which is truthful to outliers, in the sense that small-scale features are detected before visualization and then treated specially during context visualization. More generally, we present a solution which enables context visualization at several levels of abstraction, for the representation of both outliers and trends. We introduce outlier detection and context generation to parallel coordinates on the basis of a binned data representation. This leads to an output-oriented visualization approach, meaning that only those parts of the visualization process which actually affect the final rendering are executed. Accordingly, the performance of this solution depends much more on the visualization size than on the data size, which makes it especially interesting for large datasets. The new solution outperforms previous approaches and was successfully applied to datasets with up to 3 million data records and up to 50 dimensions.
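The binned outlier separation can be sketched in a few lines: values falling into sparsely populated bins are treated as outliers and kept individually, while dense bins form the aggregated context. The bin count and sparsity threshold below are invented for illustration; the paper's actual detection procedure is more elaborate:

```python
# Sketch of outlier separation via a binned data representation
# (one axis of a parallel-coordinates plot); thresholds are assumptions.

def bin_and_split(values, lo, hi, nbins, outlier_threshold):
    """Histogram values into nbins over [lo, hi].

    Returns (dense_bins, outliers): bins with at least
    outlier_threshold members form the context; values in sparser
    bins are reported individually as outliers.
    """
    width = (hi - lo) / nbins
    bins = [[] for _ in range(nbins)]
    for v in values:
        i = min(int((v - lo) / width), nbins - 1)  # clamp v == hi
        bins[i].append(v)
    dense = [b for b in bins if len(b) >= outlier_threshold]
    outliers = [v for b in bins if len(b) < outlier_threshold for v in b]
    return dense, outliers
```

Dense bins can then be rendered as deemphasized context bands, while the returned outliers are drawn as individual polylines so that small-scale features survive the abstraction.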
Large-scale visualization systems are typically designed to efficiently “push” datasets through the graphics hardware. However, exploratory visualization systems are increasingly expected to support scalable data manipulation, restructuring, and querying capabilities in addition to core visualization algorithms. We posit that newly emerging abstractions for parallel data processing, in particular computing clouds, can be leveraged to support large-scale data exploration through visualization. In this paper, we take a first step in evaluating the suitability of the MapReduce framework for implementing large-scale visualization techniques. MapReduce is a lightweight, scalable, general-purpose parallel data processing framework increasingly popular in the context of cloud computing. Specifically, we implement and evaluate a representative suite of visualization tasks (mesh rendering, isosurface extraction, and mesh simplification) as MapReduce programs, and report quantitative performance results from applying these algorithms to realistic datasets. For example, we perform isosurface extraction of up to 16 isovalues for volumes composed of 27 billion voxels, simplification of meshes with 30 GB of data, and subsequent rendering with image resolutions of up to 80,000² pixels. Our results indicate that the parallel scalability, ease of use, ease of access to computing resources, and fault tolerance of MapReduce offer a promising foundation for a combined data manipulation and data visualization system deployed in a public cloud or a local commodity cluster.
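The MapReduce pattern itself is simple enough to imitate in-process. The toy below expresses a visualization-flavored task, bucketing scalar field samples by isovalue range, as a mapper and a reducer; the framework functions, bucket width, and sample data are all invented for illustration and have nothing to do with the paper's actual Hadoop pipeline:

```python
from collections import defaultdict

# Minimal in-process imitation of the MapReduce pattern.

def map_phase(records, mapper):
    """Apply the mapper to each record and group emitted values by key."""
    grouped = defaultdict(list)
    for r in records:
        for k, v in mapper(r):
            grouped[k].append(v)
    return grouped

def reduce_phase(grouped, reducer):
    """Apply the reducer to each (key, values) group."""
    return {k: reducer(k, vs) for k, vs in grouped.items()}

def mapper(sample):
    # Emit which 10-unit isovalue bucket this scalar sample falls into.
    yield (int(sample // 10), 1)

def reducer(key, counts):
    # Count samples per isovalue bucket.
    return sum(counts)

samples = [3.0, 7.5, 12.0, 15.5, 27.0]   # invented scalar field samples
hist = reduce_phase(map_phase(samples, mapper), reducer)
```

In a real deployment each `map_phase` group would be processed on a different worker, which is what gives MapReduce its parallel scalability for data-heavy visualization stages.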
Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels (the data, visual representation, textual annotations, and interactivity) and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation.
Flow visualization motivates, to a large extent, recent research efforts in scientific visualization. The continuous improvement of resources for data generation and analysis allows researchers and engineers to produce large multivariate 3D data sets with improving speed and accuracy. Analyzing and interpreting such datasets without appropriate tools is beyond the capability of the human brain. Scientific visualization, and flow visualization in particular, aims to provide such tools. The approach we advocate is to follow a visualization process involving data preprocessing, visualization mapping, and rendering. We address the issues related to the second step, namely visualization mappings of vector and tensor data in flow fields. We place this process in perspective to other fields of scientific study by taking the point of view of representation theory. This allows us to classify visualization techniques and to provide a unified framework for analyzing various vector and tensor mappings.
This paper proposes a model of information aesthetics in the context of information visualization. It addresses the need to acknowledge a recently emerging number of visualization projects that combine information visualization techniques with principles of creative design. The proposed model contributes to a better understanding of information aesthetics as a potentially independent research field within visualization that specifically focuses on the experience of aesthetics, dataset interpretation and interaction. The proposed model is based on analysing existing visualization techniques by their interpretative intent and data mapping inspiration. It reveals information aesthetics as the conceptual link between information visualization and visualization art, and includes the fields of social and ambient visualization. This model is unique in its focus on aesthetics as the artistic influence on the technical implementation and intended purpose of a visualization technique, rather than subjective aesthetic judgments of the visualization outcome. This research provides a framework for understanding aesthetics in visualization, and allows for new design guidelines and reviewing criteria.
Weather factors such as temperature, moisture, and air pressure are geographic phenomena distributed continuously in space, without boundaries. Weather factors have field characteristics, yet their data are collected discretely at nodes, which are treated as spatial objects. In this article, the multivariate cube model is employed to visualize weather factor data in two modes: object-based visualization and field-based visualization. On a multivariate cube, the 2-D Cartesian coordinate systems representing the various factors at a node are embedded in a space-time cube at the node's position on the map plane, where the data of each factor are represented as histogram bars with respect to time. This representation of factors on a multivariate cube supports both object-based and field-based visualization. The object-based mode displays the variation of one or more factors over time at one or more nodes, the difference between the values of a factor at various spatial positions, and the correlation between various factors at one or more spatial positions at the same time. The field-based mode displays each factor on layers associated with time; each factor layer is constituted by converting point data of the factor recorded at nodes into surface data. The field-based mode applies models of stopped processes and dynamics to infer surface data from point data, and indicates the value of factors at any spatial position, where the object-based mode may then be applied to display data just as at the nodes. The mutual transformation of data between the two modes on a multivariate cube extends analytical problems from the locations of nodes to every point in space.
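The point-to-surface conversion at the heart of the field-based mode can be sketched with inverse distance weighting, one common interpolation choice. This is an assumption for illustration only; the article's stopped-process and dynamics models are not reproduced here, and the node data below are invented:

```python
# Sketch of converting point (node) data to surface data by inverse
# distance weighting (IDW); a common choice, assumed here rather than
# taken from the article.

def idw(nodes, x, y, power=2.0):
    """Interpolate a field value at (x, y) from (xi, yi, vi) nodes.

    Each node contributes with weight 1 / distance**power, so nearby
    observations dominate the inferred surface value.
    """
    num = den = 0.0
    for xi, yi, vi in nodes:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return vi          # query point coincides with a node
        w = 1.0 / d2 ** (power / 2)
        num += w * vi
        den += w
    return num / den
```

Evaluating `idw` over a grid of (x, y) positions yields the surface layer for one factor at one time step; stacking such layers over time gives the field-based view.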
It remains challenging for information visualization novices to rapidly construct visualizations during exploratory data analysis. We conducted an exploratory laboratory study in which informationvisualization novices explored fictitious sales data by communicating visualization specifications to a human mediator, who rapidly constructed the visualizations using commercial visualization software. We found that three activities were central to the iterative visualization construction process: data attribute selection, visual template selection, and visual mapping specification. The major barriers faced by the participants were translating questions into data attributes, designing visual mappings, and interpreting the visualizations. Partial specification was common, and the participants used simple heuristics and preferred visualizations they were already familiar with, such as bar, line and pie charts. We derived abstract models from our observations that describe barriers in the data exploration process and uncovered how information visualization novices think about visualization specifications. Our findings support the need for tools that suggest potential visualizations and support iterative refinement, that provide explanations and help with learning, and that are tightly integrated into tool support for the overall visual analytics process.
This paper treats network data visualization using the Parallel Coordinates version of Time-tunnel (PCTT) for intrusion detection. Time-tunnel is originally a multidimensional data visualization tool, and its Parallel Coordinates version provides the functionality of parallel coordinates visualization. It can be used for the visualization of network data because IP packet data have many attributes, and such multi-attribute data can be visualized using parallel coordinates. In this paper, the authors propose the combined use of PCTT and 2Dto2D visualization functionality for intrusion detection. The 2Dto2D visualization functionality, whose concept is originally derived from the nicter Cube, displays multiple lines that represent four-dimensional (four-attribute) data, drawn from one plane (2D of two attributes) to another plane (2D of the other two attributes) in a 3D space. This 2Dto2D visualization functionality was introduced into PCTT. Network attacks have certain access patterns strongly related to four attributes of IP packet data: source IP, destination IP, source port, and destination port. Thus, 2Dto2D visualization is useful for detecting such access patterns. The authors show several network attack patterns visualized using PCTT with 2Dto2D visualization as examples for intrusion detection.
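The kind of access pattern that 2Dto2D visualization makes visible, for example one source fanning out across many destination ports, can also be checked programmatically. The toy below flags a horizontal port scan from four-attribute flow tuples; the IP addresses, ports, and threshold are invented for illustration and are not part of PCTT:

```python
from collections import defaultdict

# Toy sketch: a port scan shows up as one source IP touching many
# destination ports on one host; flow tuples below are invented.

def detect_port_scans(flows, port_threshold):
    """flows: iterable of (src_ip, dst_ip, src_port, dst_port).

    Returns (src_ip, dst_ip) pairs that touched at least
    port_threshold distinct destination ports.
    """
    ports = defaultdict(set)
    for src, dst, sport, dport in flows:
        ports[(src, dst)].add(dport)
    return [pair for pair, ps in ports.items() if len(ps) >= port_threshold]

# One scanner probing ports 20-24, plus one benign web request.
flows = [('10.0.0.5', '10.0.0.9', 40000 + i, p)
         for i, p in enumerate(range(20, 25))]
flows.append(('10.0.0.7', '10.0.0.9', 51000, 80))
suspects = detect_port_scans(flows, port_threshold=5)
```

In the visualization, the same pattern appears as a fan of lines from a single point on the (source IP, source port) plane to a spread of points on the (destination IP, destination port) plane.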
Knowledge visualization is an emerging field of research that develops on the basis of scientific computing visualization, data visualization, and information visualization. In this paper, the concept, formats, and theoretical basis of knowledge visualization are discussed; three kinds of knowledge visualization (concept maps, mind maps, and visual metaphors) and ways of applying them to improve learning are examined. First, two classifications of knowledge visualization are introduced: one distinguishes six main groups, the other five forms. Then the principles of knowledge visualization are analyzed. Finally, examples of the three forms of knowledge visualization applied in learning are given, together with their learning effects and the reasons for the improvements.
Visualization exploration is the process of extracting insight from data via interaction with visual depictions of that data. Visualization exploration is more than presentation; the interaction with both the data and its depiction is as important as the data and the depiction themselves. Significant visualization research has focused on the generation of visualizations (the depiction); less effort has focused on the exploratory aspects of visualization (the process). However, without formal models of the process, visualization exploration sessions cannot be fully utilized to assist users and system designers. Toward this end, we introduce the P-Set model of visualization exploration for describing this process and a framework to encapsulate, share, and analyze visual explorations. In addition, systems utilizing the model and framework are more efficient as redundant exploration is avoided. Several examples drawn from visualization applications demonstrate these benefits. Taken together, the model and framework provide an effective means to exploit the information within the visual exploration process.
In visualization, we use the terms data, information and knowledge extensively, often in an interrelated context. In many cases, they indicate different levels of abstraction, understanding, or truthfulness. For example, "visualization is concerned with exploring data and information," "the primary objective in data visualization is to gain insight into an information space," and "information visualization" is for "data mining and knowledge discovery." In other cases, these three terms indicate data types, for instance, as adjectives in noun phrases, such as data visualization, information visualization, and knowledge visualization. These examples suggest that data, information, and knowledge could serve as both the input and output of a visualization process, raising questions about their exact role in visualization.
To meet the growing demand of communicating climate science and policy research, the interdisciplinary field of climate visualization has increasingly extended its traditional use of 2D representations and techniques from the field of scientific visualization to include information visualization for the creation of highly interactive tools for both spatial and abstract data. This paper provides an initial discussion on the need and design of evaluations for climate visualization. We report on previous experiences and identify how evaluation methods commonly used in information visualization can be used in climate visualization to increase our understanding of visualization techniques and tools.
Knowledge visualization is an emerging field of research that developed on the basis of scientific computing visualization, data visualization, and information visualization. This paper introduces the definition of knowledge visualization and each of its formats. It then puts forward the possibility that knowledge visualization can help solve the problem of how knowledge is transferred from teachers to students and rebuilt by students in education. Based on Gupta and Govindarajan (2000), who identify five elements of a successful transfer, this paper designs strategies that can be applied in the educational process to help teachers transfer information to students, and to help students rebuild their own knowledge from that information, by means of knowledge visualization.
Since novice users of visualization systems lack knowledge and expertise in data visualization, it is a tough task for them to generate efficient and effective visualizations that allow them to comprehend information that is embedded in the data. Therefore, systems supporting the users to design appropriate visualizations are of great importance. The GADGET (Goal-oriented Application Design Guidance for modular visualization EnvironmenTs) system, which has been developed by the authors (1997), interactively helps users to design scientific visualization applications by presenting appropriate MVE (Modular Visualization Environment) prototypes according to the specification of the visualization goals expressed mainly with the Wehrend matrix (S. Wehrend & C. Lewis, 1990). This paper extends this approach in order to develop a system named GADGET/IV, which is intended to provide the users with an environment for semi-automatic design of information visualization (IV) applications. To this end, a novel goal-oriented taxonomy of IV techniques is presented. Also, an initial design of the system architecture and user assistance flow is described. The usefulness of the GADGET/IV system is illustrated with example problems of Web site access frequency analysis.
Power system visualization is a technology combining power science with multiple disciplines, integrating computer graphics, geography, ergonomics, and others. It provides a vivid way for engineers to study energy management systems by a variety of means. This paper presents a framework for a visualization platform in power systems, built around the supporting platform technology and the hierarchical display technology of visualization. By comparison, we decide on a graphics development kit. Both two-dimensional and three-dimensional visualization techniques are demonstrated in this paper; these techniques enrich power system visualization. Hierarchical tree views and multi-theme window display technology are presented to convey more information. Using this visualization framework, dispatchers can find problems in time and take appropriate measures. The visualization platform has been applied in the construction of China's smart power grid and brings significant social and economic benefits. It has greatly enhanced the dispatching level of power systems and helps transform traditional dispatching into intelligent dispatching.
We evaluate and compare video visualization techniques based on fast-forward. A controlled laboratory user study (n = 24) was conducted to determine the trade-off between support of object identification and motion perception, two properties that have to be considered when choosing a particular fast-forward visualization. We compare four different visualizations: two representing the state of the art and two new variants introduced in this paper. The two state-of-the-art methods we consider are frame-skipping and temporal blending of successive frames. Our object trail visualization leverages a combination of frame-skipping and temporal blending, whereas predictive trajectory visualization supports motion perception by augmenting the video frames with an arrow that indicates the future object trajectory. Our hypothesis was that each of the state-of-the-art methods satisfies just one of the goals: support of object identification or motion perception. Thus, they represent the two ends of the visualization design space. The key findings of the evaluation are that object trail visualization supports object identification, whereas predictive trajectory visualization is most useful for motion perception. However, frame-skipping surprisingly exhibits reasonable performance for both tasks. Furthermore, we evaluate the subjective performance of three different playback-speed visualizations for adaptive fast-forward, a subdomain of video fast-forward.
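Temporal blending, one of the two baseline techniques above, reduces to per-pixel averaging of successive frames. The sketch below illustrates the idea on small grayscale number grids standing in for real video frames; it is a minimal illustration, not the study's implementation:

```python
# Sketch of temporal blending for fast-forward: average every k
# successive frames into one output frame. Frames are represented as
# 2D lists of grayscale values for illustration.

def temporal_blend(frames, k):
    """Blend every k consecutive frames into one by per-pixel averaging."""
    out = []
    for start in range(0, len(frames) - k + 1, k):
        group = frames[start:start + k]
        blended = [[sum(f[r][c] for f in group) / k
                    for c in range(len(group[0][0]))]
                   for r in range(len(group[0]))]
        out.append(blended)
    return out
```

Frame-skipping, by contrast, would simply keep `frames[::k]`; blending trades the crisp individual frames of skipping for motion trails that hint at object trajectories.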
Distribution automation systems supervise and manage all the loads, equipment, and faults in the distribution area. Due to the vast area, the large amount of equipment, and complex circumstances, traditional display methods cannot satisfy the demands of a modern distribution automation system. Traditional distribution automation systems often use one-line diagrams, geographic diagrams, curve drawings, and bar charts to display the system state, but these are too simple to display complex and large-scale diagrams. New techniques need to be introduced to supervise, control, and display the running state of the distribution automation system. Visualization is a practical technique that emerged alongside the development of computer graphics in the 1990s. It transforms complex engineering data of every kind into intuitive graphics, helping people gain a thorough grasp and correct understanding of the facts. This paper introduces graph visualization techniques and discusses the application of visualization in the distribution automation system. It provides a new way of thinking about analyzing data, uncovering the connections between data, and selecting suitable display methods. The paper introduces many kinds of visualization graphics, such as 2D and 3D graphs, contour lines, gradient area coloring, pie charts, arrowheads, direction, speed, and varying size, and describes methods to realize these techniques. The new techniques include the OpenGL-based three-dimensional graph technique, the animation technique, the contouring algorithm, and the area-drawing algorithm. The research indicates that it is inefficient to simply display unprocessed system data; a vivid visualized graph can express information that would otherwise require large amounts of raw data.
Our visualization software not only expresses the system with visualization graph methods, but also carries out analysis and mining on the system's massive data. It discovers intrinsic relationships in the data to reflect the system state accurately and displays them with suitable visualization methods. The information expressed in the distribution automation system includes state, capacity, margin, phase angle, speed, load distribution, movement tendency, stable region, geographic distribution information, and so on. The display of this information is not isolated; a single visualization method may simultaneously express many kinds of data meaning. Admittedly, visualization techniques have not been used in our country for very long. Many companies use overseas software directly, and few companies develop their own real-time application visualization software. Our company has made great progress in researching computer graphics techniques and applying visualization in the real-time distribution automation system, and has obtained quite good results. As the scale of our power system keeps expanding and requirements grow, visualization techniques are bound to find broad application.
Large data visualization problems generally demand higher resolutions than are available on typical desktop displays. An isosurface from a multiterabyte data set may consist of several million polygons. A common 1600 × 1200 display (1.92 million pixels) could at most display every polygon as a single pixel. This is insufficient resolution for examining the detail of the isosurface. Commodity hardware components provide one approach for handling large data visualization problems. For example, the PC gaming industry has exponentially increased the processing power of graphics cards to the point where they now are used in graphics supercomputers, such as the SGI Onyx4 and Prism. Commodity-based visualization clusters are becoming popular in the visualization community because of the high performance-to-cost ratio. Prior to the visualization cluster, graphics supercomputers, such as SGI's Onyx systems, were needed for high-resolution, multimonitor capability. The Scientific Visualization Center (SVC) at the U.S. Army Engineer Research and Development Center Major Shared Resource Center has purchased a visualization cluster to explore the limits of this trend with respect to large, computational data sets. Results from central processing unit and graphics benchmarking between the Graphstream visualization cluster, an SGI Onyx340, and an SGI Prism will be presented.
This paper presents a Web portal system that supports visualization of a large number of data sets. On the Web site, users can browse or search for required data sets and select how the data should be visualized. Data sets can be retrieved from different locations via multiple protocols including SSH, HTTP, FTP and GridFTP. Visualization tasks can be executed on visualization servers at different locations on demand. Interactive visualization is supported. Visualization results can be stored and retrieved later. The portal is used in a Tsunami warning system where a lot of tsunami cases are simulated in order to identify risk areas along the coast of Thailand.
The paper explains the advantages of using low-cost, configurable data visualization components that can be embedded and distributed in electronic documents and reports. With the increasing use of electronic documents distributed via intranets and the Internet, providing interactive visualization techniques within scientific and engineering reports has become practicable. This new component technology allows the authors of a report to distribute it with a specific "data viewer," allowing recipients to interactively examine the data in the same way as the original analyst. A "thin" client, by definition, has the minimal software required to function as a user-interface front end for a Web-enabled application, which raises the issue of client-side versus server-side visualization rendering. Real-time visual data manipulation doesn't translate well to a "thin" client. While the VRML file format allows distribution of 3D visualization scenes over the Web, the user has no direct access to the underlying data sources: the "mapping" of numerical data into geometry (VRML) takes place on the server side. In the "thin" client model, nearly all functionality is delivered by the server side of the visualization engine, while the client performs very simple display and querying functions.
In order to clearly visualize video structure and provide a user-friendly system for reviewing and retrieving video content, we first apply a total variation (TV) metric to video shot segmentation and propose a novel video visualization method in this paper. First, video data visualization design principles are proposed based on an analysis of the characteristics of video data. Second, a TV metric is proposed to measure the distance between two video frames. Video shots are then detected with the TV metric, and key frames are extracted with the K-means algorithm. Finally, following the proposed design principles, a spiral video visualization method is proposed. Experimental results show that the proposed method expresses the video content and overall structure directly and clearly.
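The shot-detection step can be sketched as follows. This is a minimal illustration rather than the paper's implementation: it assumes the TV distance is taken between normalized intensity histograms of consecutive frames, and the boundary threshold is a hypothetical value.

```python
import numpy as np

def tv_distance(frame_a, frame_b, bins=64):
    """Total-variation distance between two frames' intensity histograms:
    TV(p, q) = 0.5 * sum(|p_i - q_i|) for normalized histograms p, q."""
    p, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    q, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    p = p / p.sum()
    q = q / q.sum()
    return 0.5 * np.abs(p - q).sum()

def detect_shots(frames, threshold=0.4):
    """Mark a shot boundary wherever consecutive frames differ strongly."""
    boundaries = [0]
    for i in range(1, len(frames)):
        if tv_distance(frames[i - 1], frames[i]) > threshold:
            boundaries.append(i)
    return boundaries
```

A key-frame step would then run K-means within each detected shot and keep the frames nearest the cluster centers.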
Through long-term marine surveys and research, and especially with the development of new marine environment monitoring technologies, prodigious amounts of complex marine environment data continue to grow rapidly. This paper recommends an integrative visualization solution for these data, to enhance the visual display of data and data archives and to enable joint use of distributed data from different organizations and communities. Following this strategy, after analyzing web services technologies and defining the concept of a marine information grid, the paper focuses on spatiotemporal visualization and proposes a process-oriented spatiotemporal visualization method. It also provides an original visualization architecture that is integrative and based on the explored technologies. It then shows how marine environment data are organized according to the spatiotemporal visualization method, and how the organized data are represented for use with web services and stored in a reusable fashion. Finally, a prototype system for the marine environment data of the South China Sea provides visualization of Argo floats, sea surface temperature fields, sea current fields, salinity, in-situ investigation data, and ocean stations. The integrative visualization architecture, highlighting the process-oriented temporal visualization method, is illustrated in the prototype system, demonstrating the effectiveness of the architecture and the proposed method.
Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources, the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys do not receive the same level of treatment as those of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of the visualization support staff: How big should a visualization program be, that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?
This article looks at the current and future roles of information visualization, semantics visualization, and visual analytics in policy modeling. Many experts believe that you can't overestimate visualization's role in this respect.
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use into the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness. We demonstrate the utility of the DVV taxonomy by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few IGS visualization systems' components have been validated and even fewer are evaluated.
This paper presents a framework that extends traditional visualization to support distributed visualization in grid environments. The framework adopts the emerging Web Services Resource Framework (WSRF) to deploy visualization algorithms as Web services on grid nodes. These visualization algorithms are developed with the Visualization Toolkit (VTK) library. Triana, an open-source problem-solving environment, is used to fulfill a user's requests for any deployed visualization algorithm on a local or remote computing node. Ganglia, a scalable distributed monitoring system for high-performance computing systems such as clusters and grids, gathers state information from each grid node and helps the user select appropriate grid nodes as computing nodes for executing distributed visualization tasks. To evaluate the feasibility of the proposed framework, a case study is presented.
This paper describes an advanced visualization method for the analysis of defects in industrial 3D X-Ray Computed Tomography (XCT) data. We present a novel way to explore a high number of individual objects in a dataset, e.g., pores, inclusions, particles, fibers, and cracks, demonstrated on the special application area of pore extraction in carbon fiber reinforced polymers (CFRP). After calculating the individual object properties volume, dimensions and shape factors, all objects are clustered into a mean object (MObject). The resulting MObject parameter space can be explored interactively. To do so, we introduce the visualization of mean object sets (MObject Sets) in a radial and a parallel arrangement. Each MObject may be split up into sub-classes by selecting a specific property, e.g., volume or shape factor, and the desired number of classes. Applying this interactive selection iteratively leads to the intended classifications and visualizations of MObjects along the selected analysis path. The differing scale factors of the MObjects along the analysis path are visualized through a visual linking approach. Furthermore, the representative MObjects are exported as volumetric datasets to serve as input for successive calculations and simulations. In the field of porosity determination in CFRP non-destructive testing, practitioners use representative MObjects to improve ultrasonic calibration curves. Representative pores also serve as input for heat conduction simulations in active thermography. For a fast overview of the pore properties in a dataset we propose a local MObjects visualization in combination with a color-coded homogeneity visualization of cells. The advantages of our novel approach are demonstrated using real world CFRP specimens. The results were evaluated through a questionnaire in order to determine the practicality of the MObjects visualization as a supportive tool for domain specialists.
This paper proposes a vector field visualization that mimics a sketch-like representation. The visualization combines two major perspectives: large-scale trends, conveyed by a strongly simplified field as a background visualization, and a local visualization highlighting strongly expressed features at their exact positions. Each component considers the vector field itself and its spatial derivatives. The derivative is an asymmetric tensor field, which allows the deduction of scalar quantities reflecting distinctive field properties like strength of rotation or shear. The basis of the background visualization is a vector and scalar clustering approach. The local features are defined as the extrema of the respective scalar fields. Applying scalar field topology provides a profound mathematical basis for the feature extraction. All design decisions are guided by the goal of generating a simple-to-read visualization. To demonstrate the effectiveness of our approach, we show results for three data sets of differing complexity and characteristics.
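The scalar quantities deduced from the derivative can be illustrated with a short sketch. This is a generic construction, not the paper's code: it derives curl (strength of rotation), a shear magnitude, and divergence from finite-difference estimates of the Jacobian of a gridded 2D field.

```python
import numpy as np

def jacobian_scalars(u, v, dx=1.0, dy=1.0):
    """Derive scalar fields from the Jacobian of a 2D vector field (u, v).
    The antisymmetric part of the Jacobian gives rotation (curl); the
    symmetric deviatoric part gives a shear magnitude; the trace gives
    divergence. np.gradient returns derivatives in axis order (y, then x)."""
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    curl = dv_dx - du_dy                                          # rotation strength
    shear = np.sqrt((du_dx - dv_dy) ** 2 + (du_dy + dv_dx) ** 2)  # shear magnitude
    divergence = du_dx + dv_dy
    return curl, shear, divergence
```

For a rigid rotation field (u = -y, v = x) this yields constant curl 2 with zero shear and divergence, matching the analytic values.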
Visualization has become an indispensable tool in many areas of science and engineering. In particular, the advances made in the field of visualization over the past 20 years have turned visualization from a presentation tool into a discovery tool. Machine learning has achieved great success in both data mining and computer graphics; surprisingly, the study of systematic ways to employ machine learning in visualization is meager. Like human learning, a computer program can learn from previous input data to optimize its performance on new data. In the context of visualization, the use of machine learning can potentially free us from manually sifting through all the data. This paper describes intelligent visualization designs for three different applications: (1) volume classification and visualization, (2) 4D flow feature extraction and tracking, and (3) network scan characterization.
Supervised learning is the machine learning task of inferring a function from labeled training data. We introduce a new network data visualization framework that operates with different supervised algorithms, because existing network data visualization tools are mostly designed for network administrators or advanced users: their elaborate interfaces and complicated visualizations are meaningful only to network administrators, not to beginners. The purpose of this study is to reduce the interface usability problems faced by network visualization users by creating tailored, skill-level-specific visualizations based on real-time user feedback and machine learning algorithms. The proposed framework is also designed, indirectly, to assist existing network data visualization implementations, where the demand for visualizing different levels of network data detail from the perspectives of different levels of computer users has never been fulfilled. Experiments showed that the proposed framework generates a usable interface, performs better visualization, and adapts to user feedback, while preserving its capability of intelligently adjusting the network data visualization to different levels of computer users.
This paper presents an extendable framework called "HENSON" that supports the development and application of genetic algorithm ("GA") visualizations. During the last few years, the application of software visualization technology to support people's understanding and use of evolutionary computation ("EC") has been receiving increasing attention within the EC community. However, the only visualization that could claim to be in common use is the "traditional" fitness-versus-time graph. It is suggested that the continuing lack of commonly used visualizations is due not to a lack of good visualization design but rather to a lack of good visualization support. For a visualization to be of practical use, the benefits of using it must clearly outweigh the costs of producing it. While the majority of EC visualization research continues to concentrate on the benefits of visualization, the work described in this paper concentrates on reducing the cost of producing visualizations, thereby improving the accessibility of visualization for GA users.
In computational flow visualization, integration-based geometric flow visualization is often used to explore the flow field structure. A typical time-varying dataset from a Computational Fluid Dynamics (CFD) simulation can easily require hundreds of gigabytes to terabytes of storage, which creates challenges for the subsequent data-analysis tasks. This paper presents a new technique for path-line visualization of extremely large time-varying vector data using high-performance computing. The high-level requirements that guided the formulation of the new technique are (a) support for large dataset sizes, (b) support for temporal coherence of the vector data, (c) support for distributed-memory high-performance computing, and (d) optimal utilization of computing nodes with multi-core processors. The challenge is to design and implement a technique that meets these complex requirements and balances the conflicts between them. The fundamental innovation in this work is an efficient distributed path-line visualization for large time-varying vector data. Maximum performance was reached through the parallelization of multiple processes across the cores of each computing node. The accuracy of the proposed technique was confirmed against the results of the Visualization Toolkit (VTK). In addition, the proposed technique exhibited acceptable scalability for different data sizes, with better scalability for the larger ones. Finally, the utilization of the computing nodes was satisfactory for the considered test cases.
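At the core of any path-line technique is advecting seed points through the time-varying field. A minimal serial sketch of that step, using fourth-order Runge-Kutta integration, is shown below; the `velocity` callback is an assumed stand-in for interpolation into gridded CFD data, not part of the paper's system.

```python
import numpy as np

def advect_pathline(velocity, seed, t0, t1, dt=0.01):
    """Integrate one path line through a time-varying velocity field with RK4.
    `velocity(p, t)` must return the velocity vector at position p, time t."""
    p = np.asarray(seed, dtype=float)
    path = [p.copy()]
    t = t0
    while t < t1:
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(p + dt * k3, t + dt)
        p = p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(p.copy())
        t += dt
    return np.array(path)
```

A distributed version would partition seeds (or the domain) across nodes and exchange particles at block boundaries; the integration kernel itself stays the same.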
We describe the visualization of high-dimensional marketing data for a financial asset management company. The data typically consist of 30 to 100 variables for 25,000 to half a million clients. We use a visualization of the correlation matrix as a variable-selection tool, which makes it easier to find patterns in the data. The user can then select data ranges of the selected variables and start a cluster analysis using 5 variables. The clustered data are then visualized as a set of spheres. In an additional visualization, we first sort the data values of a client variable and then visualize the sorted cubic data in a cube using volume rendering and isosurfaces. The interactive correlation visualization allows marketing researchers to quickly explore all kinds of combinations of variables, which enables them to find valuable client behavior patterns much faster. The cluster visualization allowed researchers to identify detailed groups of customers with similar behavior. Additionally, the visualization of the sorted cubic data gives in-depth information on one variable over the total sample of customers. Together, these visualizations provide a better understanding of customer behavior.
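The variable-selection idea behind the correlation-matrix view can be sketched in a few lines: compute the correlation matrix and rank variable pairs by absolute correlation. The variable names and data here are hypothetical; the paper's tool presents this interactively rather than as a ranked list.

```python
import numpy as np

def top_correlated_pairs(data, names, k=3):
    """Rank variable pairs by absolute correlation, as a simple stand-in
    for browsing the correlation matrix to pick variables for clustering."""
    corr = np.corrcoef(data, rowvar=False)    # variables are columns
    n = corr.shape[0]
    pairs = [(abs(corr[i, j]), names[i], names[j])
             for i in range(n) for j in range(i + 1, n)]
    pairs.sort(reverse=True)
    return pairs[:k]
```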
We present results from a user study that compared six visualization methods for two-dimensional vector data. Users performed three simple but representative tasks using visualizations from each method: 1) locating all critical points in an image, 2) identifying critical point types, and 3) advecting a particle. Visualization methods included two that used different spatial distributions of short arrow icons, two that used different distributions of integral curves, one that used wedges located to suggest flow lines, and line-integral convolution (LIC). Results show different strengths and weaknesses for each method. We found that users performed these tasks better with methods that: 1) showed the sign of vectors within the vector field, 2) visually represented integral curves, and 3) visually represented the locations of critical points. Expert user performance was not statistically different from nonexpert user performance. We used several methods to analyze the data including omnibus analysis of variance, pairwise t-tests, and graphical analysis using inferential confidence intervals. We concluded that using the inferential confidence intervals for displaying the overall pattern of results for each task measure and for performing subsequent pairwise comparisons of the condition means was the best method for analyzing the data in this study. These results provide quantitative support for some of the anecdotal evidence concerning visualization methods. The tasks and testing framework also provide a basis for comparing other visualization methods, for creating more effective methods and for defining additional tasks to further understand the tradeoffs among the methods. In the future, we also envision extending this work to more ambitious comparisons, such as evaluating two-dimensional vectors on two-dimensional surfaces embedded in three-dimensional space and defining analogous tasks for three-dimensional visualization methods.
Image-space line integral convolution (LIC) is a popular scheme for visualizing surface vector fields due to its simplicity and high efficiency. To avoid inconsistencies or color blur during the user interactions, existing approaches employ surface parameterization or 3D volume texture schemes. However, they often require expensive computation or memory cost, and cannot achieve consistent results in terms of both the granularity and color distribution on different scales. This paper introduces a novel image-space surface flow visualization approach that preserves the coherence during user interactions. To make the noise texture under different viewpoints coherent, we propose to precompute a sequence of mipmap noise textures in a coarse-to-fine manner for consistent transition, and map the textures onto each triangle with randomly assigned and constant texture coordinates. Further, a standard image-space LIC is performed to generate the flow texture. The proposed approach is simple and GPU-friendly, and can be easily combined with various texture-based flow visualization techniques. By leveraging viewpoint-dependent backward tracing and mipmap noise phase, our method can be incorporated with the image-based flow visualization (IBFV) technique for coherent visualization of unsteady flows. We demonstrate consistent and highly efficient flow visualization on a variety of data sets.
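For readers unfamiliar with the base technique, a minimal (non-image-space, unoptimized) LIC can be sketched as follows: each output pixel averages noise samples collected while stepping along the local streamline in both directions. This is a generic textbook sketch, not the paper's coherent mipmap-based method; the seed pixel is sampled once per direction, which is acceptable for an illustration.

```python
import numpy as np

def lic_2d(u, v, noise, length=10):
    """Minimal 2D line integral convolution: smear a noise texture along
    streamlines of the vector field (u, v) by unit-length Euler steps."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for direction in (1.0, -1.0):       # forward and backward traces
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(round(py)), int(round(px))
                    if not (0 <= i < h and 0 <= j < w):
                        break                    # left the texture
                    total += noise[i, j]
                    count += 1
                    vx, vy = u[i, j], v[i, j]
                    norm = np.hypot(vx, vy)
                    if norm < 1e-9:
                        break                    # critical point: stop tracing
                    px += direction * vx / norm
                    py += direction * vy / norm
            out[y, x] = total / max(count, 1)
    return out
```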
Exploration of the chemical space is an important component of the drug discovery process, and its importance grows with increases in computational power, which allow larger areas of the chemical space to be explored. Recently, new algorithms have emerged to automatically generate and search for compounds (objects in the chemical space) with desired properties. Although these approaches can be a big help, human interaction is usually still inevitable in the end. Visualization of the space can help make sense of the generated data, and visualization techniques are therefore usually an integral part of any task related to chemical space exploration. Methods exist for visualizing the chemical space, but there has been no framework supporting simple development of new methods. The purpose of this paper is to introduce such a modular framework, called ViFrame. ViFrame offers the possibility to implement every part of the visualization pipeline, including reading and merging molecules from multiple data sources, applying transformations and, of course, visualizing the data set in 2D space. The advantage of the framework is that it provides an environment where the user can focus on developing these tasks while the framework supports seamless integration of the developed components. The framework also incorporates an application that provides a graphical interface for manipulating modules and presenting visualization results. For simple use of the application without having to implement one's own module, several visualization methods have been implemented.
Asymmetric tensor field visualization can provide important insight into fluid flows and solid deformations. Existing techniques for asymmetric tensor fields focus on the analysis, and simply use evenly-spaced hyperstreamlines on surfaces following eigenvectors and dual-eigenvectors in the tensor field. In this paper, we describe a hybrid visualization technique in which hyperstreamlines and elliptical glyphs are used in real and complex domains, respectively. This enables a more faithful representation of flow behaviors inside complex domains. In addition, we encode tensor magnitude, an important quantity in tensor field analysis, using the density of hyperstreamlines and sizes of glyphs. This allows colors to be used to encode other important tensor quantities. To facilitate quick visual exploration of the data from different viewpoints and at different resolutions, we employ an efficient image-space approach in which hyperstreamlines and glyphs are generated quickly in the image plane. The combination of these techniques leads to an efficient tensor field visualization system for domain scientists. We demonstrate the effectiveness of our visualization technique through applications to complex simulated engine fluid flow and earthquake deformation data. Feedback from domain expert scientists, who are also co-authors, is provided.
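The split between hyperstreamline and glyph rendering hinges on whether the local tensor's eigenvalues are real or complex. A toy sketch of that classification (the function name and the Frobenius-norm choice of magnitude are illustrative, not the paper's API):

```python
import numpy as np

def classify_tensor(t):
    """Classify a 2x2 asymmetric tensor by its eigenvalues: real eigenvalues
    put the point in the 'real' domain (draw hyperstreamlines along the
    eigenvectors); complex eigenvalues put it in the 'complex' domain
    (draw an elliptical glyph). Also return a tensor magnitude."""
    eigvals = np.linalg.eigvals(t)
    domain = 'real' if np.all(np.isreal(eigvals)) else 'complex'
    magnitude = np.linalg.norm(t)   # Frobenius norm as an illustrative magnitude
    return domain, magnitude
```

In the paper, the magnitude additionally controls hyperstreamline density and glyph size, freeing color to encode other tensor quantities.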
Visualization is an important technology in group deliberation environments (GDEs). This paper proposes three types of visualized information in a GDE: visualization of discussion information, visualization of the state of consensus building, and visualization of the result of deliberation. The discussion information is visualized with a TreeView control, which displays experts' statements as a dialogue tree. The state of consensus building and the result of deliberation are visualized with an MSChart control. The paper uses the proposed technique to develop a group deliberation support system; the results show that it helps the group grasp the deliberation process efficiently and facilitates consensus building.
Analysis of power system dynamics helps us understand the operation of a power system. It is therefore important to design and develop advanced visualization tools that interpret frequency, voltage magnitude, and phase angle information and present it to operators in an intuitive manner. On the basis of measurement data collected by widely distributed frequency disturbance recorders (FDRs), visualization tools have been implemented for the FNET system. A number of FNET visualization tools are discussed in this paper, including real-time visualization, animated event replay, visualization of oscillation mode analysis, and visualization of propagation effects in two-dimensional systems. These tools correlate the FDR measurements with their corresponding geographical information and transform the combined matrices into different graphic formats using various computing techniques and programming languages. The graphics generated by these tools facilitate power system operation by allowing an operator to monitor power system dynamics, perform post-event analysis, and identify modal oscillations more efficiently.
Exploring complex, very large data sets requires interfaces to present and navigate through the visualization of the data. Two types of audience benefit from such coherent organization and representation: first, the user of the visualization system can examine and evaluate their data more efficiently; second, collaborators or reviewers can quickly understand and extend the visualization. The needs of these two groups are addressed by the spreadsheet-like interface described in this paper. The interface represents a 2D window in a multidimensional visualization parameter space. Data is explored by navigating this space via the interface. The visualization space is presented to the user in a manner that clearly identifies which parameters correspond to which visualized result. Operations defined on this space can be applied which generate new parameters or results. Combined with a general-purpose interpreter, these functions can be utilized to quickly extract desired results. Finally, by encapsulating the visualization process, redundant exploration is eliminated and collaboration is facilitated. The efficacy of this novel interface is demonstrated through examples using a variety of data sets in different domains.
Management information has a multi-dimensional, multi-level structure that is difficult to present clearly with ordinary information visualization. In addition to introducing the definition and characteristics of management information visualization, the paper puts forward a method of management information visualization centered on the information visualization unit. The concept of the visualization unit is discussed in this paper; it serves as the atomic element of management information visualization (MIV) and forms the basis of MIV.
Information visualization is important because it makes appropriate acquisition of information possible through visual means. Choosing the most appropriate information visualization method before starting to solve a given visual problem is essential to obtaining an efficient solution. The objective of this article is to describe an information visualization meta-model classification approach based on treemaps, which is able to identify the best information visualization model for a given problem. The current state of the visualization field is described, and then the rules and criteria used in our research are shown, with the aim of presenting a proposal for a treemap-based meta-information visualization model, inspired by the periodic table meta-model.
This paper follows a method to effectively provide useful visualizations for the examination timetabling problem. The results are interactive visualizations that offer a much more sensible way of performing analysis, since they draw attention toward a complete understanding of the data and also present more data within the same frame of time and space for the schools. We present the analysis we carried out to provide these visualizations, using the Prefuse Visualization Toolkit to develop a tree view of the data; other information visualization techniques are applied according to their suitability. We treat this stage as raw-data preprocessing, since the data has not yet been fed into any scheduler to generate timetables. Finally, the interactive visualization has helped timetablers improve the assignment of timeslots for the exams in a particular school.
Currently, many visualization methods are used in computer-assisted medicine. It is commonly considered that a single visualization scheme hinders interaction and limits the quality and quantity of the information shown. In this paper we study the specific requirements of a maxillofacial surgery simulation tool for facial appearance prediction. The different stages of the application call for presenting medical information in different ways. We propose to adapt the visualization techniques to these needs: a hybrid volumetric and polygonal visualization for the planning stage, as well as a 2D scheme for surgery definition. Finally, we propose the use of mesh visualization for the simulated model, which first requires 3D reconstruction of the surface to be visualized.
Between 1994 and 1997, researchers at Southwest Research Institute (SwRI) investigated methods for distributing parallel computation and data visualization under the support of an internally funded Research Initiative Program entitled the Advanced Visualization Technology Project (AVTP). A hierarchical data cache architecture was developed to provide a flexible interface between the modeling or simulation computational processes and data visualization programs. Compared to conventional post facto data visualization approaches, this data cache structure provides many advantages, including simultaneous data access by multiple visualization clients, comparison of experimental and simulated data, and visual analysis of a computer simulation as the computation proceeds. However, since the data cache resided on a single workstation, this approach did not address scalability: avoiding the data storage bottleneck requires distributing the data across multiple networked workstations. Scalability through distributed database approaches is being investigated as part of the Applied Visualization using Advanced Network Technology Infrastructure (AVANTI) project. This paper describes a methodology currently under development that is intended to avoid the bottlenecks that typically arise when data consumers (e.g., visualization applications) must access and process large amounts of data that were generated on, and reside on, other hosts and must pass through a central data cache before being used by the data consumer.
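The role of such a data cache, letting a running simulation publish results while several visualization clients read them concurrently and independently, can be sketched minimally as follows. This is an assumed design for illustration, not the AVTP code; all names are hypothetical.

```python
# Minimal sketch of a data cache between a simulation (producer) and
# multiple visualization clients (consumers).

import threading

class DataCache:
    def __init__(self):
        self._frames = []             # frames published so far, in order
        self._lock = threading.Lock()

    def publish(self, frame):
        """Called by the simulation as the computation proceeds."""
        with self._lock:
            self._frames.append(frame)

    def latest(self):
        """A monitoring client renders the most recent state."""
        with self._lock:
            return self._frames[-1] if self._frames else None

    def frame(self, i):
        """A comparison client can replay any earlier frame."""
        with self._lock:
            return self._frames[i]

cache = DataCache()
for step in range(3):
    cache.publish({"step": step, "temperature": 300 + step})
print(cache.latest()["step"])  # prints 2
```

Because every client reads through the same cache, experimental and simulated frames can be compared side by side; the single shared list is also exactly the single-workstation bottleneck the AVANTI work set out to distribute.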
The pervasive concept of cloud computing suggests that visualization, which is both data- and computing-intensive, is a perfect cloud computing application. This paper presents a sketch of an interface design for an online visualization service. To make such a service attractive to a wider audience, its user interface must be simple and easy to use for both casual and expert users. We envision that an interface supporting visualization processes directed mainly by browsing and assessing existing visualizations, in the form of images and videos, will be very appealing to casual users in particular. That is, the aim is to maximize the utilization of the rich visualization data on the web. Without loss of generality, we consider volume data visualization applications for our interface design. We also discuss issues in organizing online visualization data and in constructing and managing a rendering cloud.
Information uncertainty is inherent in many problems and is often subtle and complicated to understand. Although visualization is a powerful means for exploring and understanding information, information uncertainty visualization is ad hoc and not widespread. This paper identifies two main barriers to the uptake of information uncertainty visualization: first, the difficulty of modeling and propagating the uncertainty information and, second, the difficulty of mapping uncertainty to visual elements. To overcome these barriers, we extend the spreadsheet paradigm to encapsulate uncertainty details within cells. This creates an inherent awareness of the uncertainty associated with each variable. The spreadsheet can hide the uncertainty details, enabling the user to think simply in terms of variables. Furthermore, the system can aid with automated propagation of uncertainty information, since it is intrinsically aware of the uncertainty. The system also enables mapping the encapsulated uncertainty to visual elements via the formula language and a visualization sheet. Support for such low-level visual mapping provides flexibility to explore new techniques for information uncertainty visualization.
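The cell-level encapsulation described above can be illustrated with a toy sketch, not the paper's system: each cell carries a value together with a standard deviation, and ordinary arithmetic propagates the uncertainty automatically. The propagation rule here is first-order Gaussian propagation under an independence assumption; the class and field names are illustrative.

```python
# Toy "uncertainty cell": arithmetic on cells propagates standard
# deviations automatically (first-order, independent variables).

import math

class UCell:
    def __init__(self, value, std=0.0):
        self.value, self.std = value, std

    def __add__(self, other):
        # var(a + b) = var(a) + var(b) for independent a, b
        return UCell(self.value + other.value,
                     math.hypot(self.std, other.std))

    def __mul__(self, other):
        # relative variances add for independent products
        v = self.value * other.value
        rel = math.hypot(self.std / self.value, other.std / other.value)
        return UCell(v, abs(v) * rel)

    def __repr__(self):
        return f"{self.value} ± {self.std:.3g}"

a = UCell(10.0, 1.0)
b = UCell(4.0, 0.5)
print(a + b)  # 14.0 ± 1.12
```

The user thinks simply in terms of `a` and `b`; the uncertainty travels with every intermediate result, which is the "inherent awareness" the paper argues for, and the stored `std` is what a visualization sheet could then map to a visual element.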
The use of visualization to present information is not a new phenomenon. It has been used in maps, scientific drawings, and data plots for over a thousand years. Examples from cartography include Ptolemy's Geographia (2nd century AD), a map of China (1137 AD), and Minard's map (1861) of Napoleon's invasion of Russia. Most of the concepts learned in devising these images carry over in a straightforward manner to computer visualization. Edward Tufte has written three critically acclaimed books that explain many of these principles. Computer graphics has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started in 1987 with the publication of Visualization in Scientific Computing, a special issue of Computer Graphics. Since then, there have been several conferences and workshops, co-sponsored by the IEEE Computer Society and ACM SIGGRAPH, devoted to the general topic and to special areas in the field, for example volume visualization. Most people are familiar with the digital animations produced to present meteorological data during weather reports on television, though few can distinguish between those models of reality and the satellite photos that are also shown on such programs. TV also offers scientific visualizations when it shows computer-drawn and animated reconstructions of road or airplane accidents. Some of the most popular examples of scientific visualizations are computer-generated images that show real spacecraft in action, out in the void far beyond Earth, or on other planets. Dynamic forms of visualization, such as educational animation or timelines, have the potential to enhance learning about systems that change over time.
Apart from the distinction between interactive visualizations and animation, the most useful categorization is probably between abstract and model-based scientific visualizations. Abstract visualizations show completely conceptual constructs in 2D or 3D; these generated shapes are completely arbitrary. Model-based visualizations either place overlays of data on real or digitally constructed images of reality, or make a digital construction of a real object directly from the scientific data. Scientific visualization is usually done with specialized software, though there are a few exceptions, noted below. Some of these specialized programs have been released as open source software, very often having their origins in universities, within an academic environment where sharing software tools and giving access to the source code is common. There are also many proprietary software packages of scientific visualization tools. Models and frameworks for building visualizations include the data flow models popularized by systems such as AVS, IRIS Explorer, and the VTK toolkit, and data state models in spreadsheet systems such as the Spreadsheet for Visualization and the Spreadsheet for Images.

Applications of visualization

As a subject in computer science, scientific visualization is the use of interactive, sensory representations, typically visual, of abstract data to reinforce cognition, hypothesis building, and reasoning. (Figure: a scientific visualization of a simulation of a Rayleigh-Taylor instability caused by two mixing fluids.) Data visualization is a related subcategory of visualization dealing with statistical graphics and geographic or spatial data (as in thematic cartography) that is abstracted in schematic form.

Scientific visualization

Scientific visualization is the transformation, selection, or representation of data from simulations or experiments, with an implicit or explicit geometric structure, to allow the exploration, analysis, and understanding of the data.
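The data flow model popularized by AVS, IRIS Explorer, and VTK can be pictured as a chain of modules, each pulling data from its upstream neighbor and transforming it before handing it on. The toy below illustrates the idea only; it is not the API of any of those systems.

```python
# Toy data flow pipeline: source -> filter -> mapper, as in a typical
# visualization system. Each module pulls from upstream on demand.

class Module:
    def __init__(self, fn, upstream=None):
        self.fn, self.upstream = fn, upstream

    def output(self):
        data = self.upstream.output() if self.upstream else None
        return self.fn(data)

# A source emits raw data, a filter thresholds it, and a mapper turns
# the surviving values into drawable "bars".
source = Module(lambda _: [3, 1, 4, 1, 5, 9, 2, 6])
threshold = Module(lambda d: [x for x in d if x >= 3], upstream=source)
mapper = Module(lambda d: [f"bar:{'#' * x}" for x in d], upstream=threshold)
print(mapper.output()[0])  # bar:###
```

Real systems add demand-driven caching and executable network editors on top of this shape, but the essential structure is the same chain of source, filters, and mappers.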
Scientific visualization focuses on and emphasizes the representation of higher-order data, using primarily graphics and animation techniques. It is a very important part of visualization, and maybe the first one, as the visualization of experiments and phenomena is as old as science itself. Traditional areas of scientific visualization are flow visualization, medical visualization, astrophysical visualization, and chemical visualization. There are several different techniques for visualizing scientific data, with isosurface reconstruction and direct volume rendering being the more common.

Educational visualization

Educational visualization uses a simulation, normally created on a computer, to create an image of something so it can be taught about. This is very useful when teaching about a topic that is difficult to otherwise see, for example atomic structure, because atoms are far too small to be studied easily without expensive and difficult-to-use scientific equipment.

Information visualization

Information visualization concentrates on the use of computer-supported tools to explore large amounts of abstract data. The term "information visualization" was originally coined by the User Interface Research Group at Xerox PARC, which included Dr. Jock Mackinlay. Practical application of information visualization in computer programs involves selecting, transforming, and representing abstract data in a form that facilitates human interaction for exploration and understanding. Important aspects of information visualization are the dynamics of the visual representation and its interactivity. Strong techniques enable the user to modify the visualization in real time, thus affording unparalleled perception of patterns and structural relations in the abstract data in question.
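Direct volume rendering, mentioned above as one of the common techniques, accumulates color and opacity along each viewing ray. The toy sketch below shows front-to-back alpha compositing for the samples on a single ray, using scalar "colors" for brevity; the transfer function is an illustrative assumption, not a standard one.

```python
# Front-to-back compositing along one viewing ray in direct volume
# rendering: each sample's color is weighted by the transparency still
# remaining in front of it.

def composite_ray(samples, transfer):
    """samples: scalar values along the ray, front first.
    transfer: maps a scalar sample to (color, opacity)."""
    color, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer(s)
        color += (1.0 - alpha) * a * c   # weight by remaining transparency
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:                # early ray termination
            break
    return color, alpha

# Toy transfer function: denser samples are brighter and more opaque.
tf = lambda s: (s, 0.5 * s)
color, alpha = composite_ray([0.2, 0.8, 1.0], tf)
```

The early-termination test is the classic optimization: once a ray is nearly opaque, samples behind it cannot contribute visibly, so sampling stops.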
Knowledge visualization

The use of visual representations to transfer knowledge between at least two persons aims to improve the transfer of knowledge by using computer-based and non-computer-based visualization methods complementarily. Examples of such visual formats are sketches, diagrams, images, objects, interactive visualizations, information visualization applications, and imaginary visualizations as in stories. While information visualization concentrates on the use of computer-supported tools to derive new insights, knowledge visualization focuses on transferring insights and creating new knowledge in groups. Beyond the mere transfer of facts, knowledge visualization aims to further transfer insights, experiences, attitudes, values, expectations, perspectives, opinions, and predictions by using various complementary visualizations. See also: picture dictionary, visual dictionary.

Product visualization

Product visualization involves visualization software technology for the viewing and manipulation of 3D models, technical drawings, and other related documentation of manufactured components and large assemblies of products. It is a key part of product lifecycle management. Product visualization software typically provides high levels of photorealism so that a product can be viewed before it is actually manufactured. This supports functions ranging from design and styling to sales and marketing. Technical visualization is an important aspect of product development. Originally, technical drawings were made by hand, but with the rise of advanced computer graphics the drawing board has been replaced by computer-aided design (CAD). CAD drawings and models have several advantages over hand-made drawings, such as the possibility of 3D modeling, rapid prototyping, and simulation.
Systems visualization

Systems visualization is a new field of visualization which integrates and subsumes existing visualization methodologies and adds to them narrative storytelling, visual metaphors (from the field of advertising), and visual design. It also recognizes the importance of complex systems theory, the interconnections of systems of systems, and the need for knowledge representation through ontologies. Systems visualization gives the viewer the ability to quickly understand the complexity of a system. Unlike other visualization approaches, such as data visualization, information visualization, flow visualization, scientific visualization, and network visualization, which focus mainly on data representation, systems visualization seeks to provide a new way to visualize complex systems of systems through an integrative approach.

Visual communication

Visual communication is the communication of ideas through the visual display of information. Primarily associated with two-dimensional images, it includes alphanumerics, art, signs, and electronic resources. Recent research in the field has focused on web design and graphically oriented usability.

Visual analytics

Visual analytics focuses on human interaction with visualization systems as part of a larger process of data analysis. Visual analytics has been defined as "the science of analytical reasoning supported by the interactive visual interface". Its focus is on human information discourse (interaction) within massive, dynamically changing information spaces. Visual analytics research concentrates on support for perceptual and cognitive operations that enable users to detect the expected and discover the unexpected in complex information spaces. Technologies resulting from visual analytics find their application in almost all fields, but are being driven by critical needs (and funding) in biology and national security.