Awareness and understanding of a set of information and the ways information can be made useful.

Advancements in Metro Regional and Core Transport Network Architectures for the Next-Generation Internet

Loukas Paraschis, in Optical Fiber Telecommunications (Sixth Edition), 2013

18.5 The Network Value of Optical Transport Innovation

The introduction of DWDM nodes with flexible ROADM and multi-directional switching (Section 18.3.4), combined with robust 100 Gb/s DWDM transmission (Section 18.3.2), has enabled wavelength-level bypass to successfully optimize the IPNGN transport architecture, as summarized in the previous paragraph. From this starting point, more dynamic optical transport provisioning, protection, and restoration has often been advocated as a source of additional benefits. Although increasingly feasible technologically, this vision must also carefully address which services would require it, where the main savings would come from, and which technologies should be adopted at each implementation step of the IPNGN evolution.

From a services perspective, a more dynamic transport network would be justified by requirements for faster provisioning. More specifically, a dynamic DWDM layer would imply a requirement for on-demand wavelength services [23]. With the exception of special-purpose, mostly research, networks, there has been very little need for dynamic wavelength services [23]. On the contrary, service providers are increasingly relying on IP/MPLS transport for dynamic provisioning and protection of packet-based services. IP/MPLS (L3) transport (Section 18.3.1) enables network operators to achieve fast, cost-effective network utilization by leveraging statistical multiplexing, with the ability for elaborate QoS prioritization and TE. At the same time, OTN has provided a transport layer for non-IP/MPLS traffic, mainly the migration of traditional SONET/SDH private-line services. In this sense, OTN has increasingly served the provisioning of such TDM-based, connection-oriented services and their related shared-mesh protection [24]. Figure 18.8a summarizes the current consensus around the transport layer for each network service. Specifically for Ethernet private line (EPL), an interesting debate has emerged: either IP/MPLS or OTN can be valid, and both have been adopted, depending on the operational (and organizational) details of each network operator and its preferred method for offering the bandwidth reservation required by the application carried over the EPL. The bandwidth reservation mechanism has also become increasingly important for inter-DC interconnections and their sub-wavelength and wavelength-level provisioning [25].


Figure 18.8. (a) The transport layer for the different network services (left) [6,7] and (b) the value of sub-wavelength multiplexing (right) [7].

From a network cost savings perspective, the extensive bypass of core routers has been the most commonly proposed, and debated, value of dynamic optical transport [7]. In this proposal, capacity is provisioned by an intermediate sub-wavelength switching layer based on OTN (or alternatively MPLS-TP or Ethernet) and a dynamic DWDM layer, which directly interconnect the service (edge) routers. However, such a mesh router architecture presents a few challenges related to efficiently forecasting, provisioning, and maintaining many more (typically an order of magnitude more; see Section 18.2) point-to-point links and layer-3 adjacencies. Recent, detailed CapEx studies [26] have established that OTN-based router bypass would usually increase the overall transport cost of typical IP networks by more than 30%. The exact amount of extra cost depends on the proportion of transit traffic and the cost of OTN relative to the cost of the bypassed routers. Network cost could increase further when OTN protection of IP traffic is also taken into consideration [27], and when OpEx inefficiencies due to the additional layer are included. The most important limitation of any future architecture based on router bypass, however, may be that such an IPNGN transport evolution focuses on minimizing the IP/MPLS router cost, which is a decreasingly important proportion (Figure 18.5) of future network cost [7,9].

As the cost of 100 Gb/s DWDM transponders starts to dominate CapEx, efficient wavelength utilization may be the most important network optimization target. For efficient wavelength utilization, statistical multiplexing makes hierarchical packet transport very useful and extensive router bypass potentially less efficient. Wavelength bypass remains valid for stable traffic in fully utilized wavelengths. Sub-wavelength multiplexing (Figure 18.8b) also becomes useful for aggregating many small, stable flows into a single wavelength, especially when the flows grow slowly, minimizing the need for re-provisioning. The OTN digital hierarchy allows efficient sub-wavelength multiplexing when each flow is not much less than 1 Gb/s (the lowest OTN granularity). OTN multiplexing can also combine IP/MPLS and non-IP traffic in the same wavelength, improving utilization of the deployed capacity. In this sense, OTN sub-wavelength multiplexing would be most beneficial in the parts of the network where traffic flows are small relative to the wavelength capacity, grow slowly, and may include a significant proportion of non-IP services [37].
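As a rough illustration of why sub-wavelength multiplexing improves wavelength utilization, the sketch below compares dedicating a 100 Gb/s wavelength to each flow against packing flows into OTN-like tributary slots of roughly 1.25 Gb/s (ODU0). The flow mix and slot size are illustrative assumptions, not figures from the chapter.

```python
# Illustrative sketch: wavelength utilization with and without
# sub-wavelength (OTN-style) multiplexing. All numbers are assumptions.
import math

WAVELENGTH_GBPS = 100.0      # assumed line rate of one DWDM wavelength
ODU0_GBPS = 1.25             # approximate lowest OTN granularity (ODU0)

flows_gbps = [0.8, 1.0, 2.5, 0.5, 3.0, 1.2, 0.9, 4.0]  # hypothetical stable flows

# Option A: one wavelength per flow (no sub-wavelength multiplexing)
wavelengths_dedicated = len(flows_gbps)
util_dedicated = sum(flows_gbps) / (wavelengths_dedicated * WAVELENGTH_GBPS)

# Option B: pack flows into ODU0-sized slots of shared wavelengths
slots_needed = sum(max(1, math.ceil(f / ODU0_GBPS)) for f in flows_gbps)
slots_per_wavelength = int(WAVELENGTH_GBPS // ODU0_GBPS)
wavelengths_muxed = math.ceil(slots_needed / slots_per_wavelength)
util_muxed = sum(flows_gbps) / (wavelengths_muxed * WAVELENGTH_GBPS)

print(f"dedicated: {wavelengths_dedicated} wavelengths, {util_dedicated:.0%} filled")
print(f"multiplexed: {wavelengths_muxed} wavelength(s), {util_muxed:.0%} filled")
```

With this hypothetical flow mix, the dedicated design lights eight wavelengths that remain almost empty, while the multiplexed design fills a single wavelength to a far higher level, which is the trade-off the paragraph above describes.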

On the other hand, for high-capacity, fast-growing IP/MPLS traffic, the convergence of the DWDM and IP/MPLS transport layers is considered the most cost-effective IPNGN evolution [4–7]. We have already discussed (Section 18.4) the OAM&P benefits of OTN encapsulation, including proactive protection based on pre-FEC monitoring (Figure 18.6). Improvements in network efficiency could also be achieved from coordinated L3 and L1 provisioning or restoration based on a multi-layer control-plane (Section 18.3.5) and wavelength bypass. Recent advancements in multi-layer control-plane implementations have been a promising transport innovation; one such framework was reported in [16]. A new set of functions that improve network operations has been built around extensions to the GMPLS standards [28]. The aim is to create a new standard that balances the two previous, diametrically opposed GMPLS models: the “peer model,” which required so much information exchange that it led to complexity, and the “overlay model,” which allowed so little exchange that it resulted in inefficiency. To this end, the DWDM “server” layer shares, through a UNI, the required information about the optical paths that constitute the links of the IP “client” layer (Figure 18.9), and the IP layer can likewise share requirements with the optical layer. At the same time, the participating layers remain independent and reasonably decoupled; for example, the IP/MPLS layer may continue to run multi-level ISIS while the DWDM ROADM layer runs OSPF. This decoupling makes the multi-layer control-plane particularly powerful: the two layers can scale independently while each organization maintains its chosen segmentation and leverages its operational expertise. It also fosters multi-vendor solutions. In the most general case, there could be more than two layers in the network, with more than one client-server (UNI) interaction, as in the example depicted in Figure 18.9b; however, the functionality generally becomes more complex as more layers are added.


Figure 18.9. The multi-layer control-plane UNI between (a) IP/MPLS and DWDM (left), and (b) in the case of more than two layers, with more than one tier of client-server (UNI) interactions (right) [16].

This information awareness could eliminate multi-layer inefficiencies and support improved network planning and operation. The goal is to maintain the SLAs of the IP/MPLS services at reduced overall network cost; conversely, other GMPLS features could improve the SLAs at equal total network cost. For example, improved network availability can be achieved by leveraging DWDM SRLG information to guide IP/MPLS routing decisions. Operational benefits could stem from optimization based on cost or latency, or from improved handling of catastrophic failures. More specifically, a successful GMPLS implementation can enable network optimization during:

Normal operation, by sharing optical layer information (e.g. SRLGs or latency) with the client layer.

New connection setup, by improving the QoS, e.g. selecting the lowest latency path.

Traffic or network changes, through path re-optimization, or by rerouting traffic appropriately (e.g. during a maintenance window).

Restoration, by requesting the optical layer to restore a connection upon physical layer failure, and subsequent network re-optimization when the network recovers from this failure.

From an operational perspective, adoption in current networks may initially be easier during normal operation, since this does not require operational adjustments to connection setup or rerouting, and it requires smaller extensions to the existing GMPLS UNI [28]. The adoption of DWDM routing based on IP-layer requirements will be more challenging, as it depends on ROADM automation and impairment-aware WSON [29], which are less mature. Eventually, however, MLCP network optimization during traffic and network changes could offer significant TCO savings, particularly since most IP links currently operate at less than 50% utilization under normal conditions, because sufficient protection capacity must be reserved against failures at the IP layer.
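To make the normal-operation case concrete, the sketch below shows one way a client (IP/MPLS) layer could use SRLG and latency information exposed by the optical layer over the UNI to select two SRLG-disjoint, low-latency paths for a protected IP link. The data structures, candidate paths, and selection logic are illustrative assumptions, not the UNI encoding or algorithm from [28].

```python
# Illustrative sketch: choosing SRLG-disjoint paths for an IP link using
# optical-layer information (SRLGs, latency) shared across a UNI.
# Candidate paths, SRLG sets, and latencies are hypothetical.
from itertools import combinations

# Each candidate optical path: (name, set of SRLG ids, one-way latency in ms)
candidate_paths = [
    ("path_A", {101, 102}, 4.1),
    ("path_B", {102, 203}, 5.0),
    ("path_C", {301, 302}, 6.3),
    ("path_D", {101, 302}, 4.8),
]

def pick_disjoint_pair(paths):
    """Return the lowest-total-latency pair of paths sharing no SRLG."""
    best = None
    for (n1, s1, l1), (n2, s2, l2) in combinations(paths, 2):
        if s1 & s2:
            continue  # shared risk: a single fiber cut could take out both
        total = l1 + l2
        if best is None or total < best[0]:
            best = (total, n1, n2)
    return best

best = pick_disjoint_pair(candidate_paths)
if best:
    print(f"working + protect pair: {best[1]}, {best[2]} ({best[0]:.1f} ms total)")
else:
    print("no SRLG-disjoint pair available")
```

Without the SRLG information shared by the optical layer, the IP layer could easily place working and protection links over paths that share a fiber or duct, which is exactly the availability risk the paragraph above mentions.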

More specifically, in the event of a link failure, an advanced MLCP implementation combined with a flexible DWDM transport layer can offer over 20% CapEx savings [28], when (A) the IP/MPLS layer is designed assuming that node failures are low-risk events, and (B) the SLAs of some best-effort traffic allow a few seconds of recovery time under failure. In such a case, best-effort traffic without strict failure-related SLA guarantees remains only partially protected in the IP layer until the DWDM layer restores the failed links over alternative DWDM paths, as depicted in Figure 18.10, at which point peak traffic is again supported for all traffic classes. The network savings increase in proportion to the percentage of best-effort traffic, because minimal additional protection bandwidth must be pre-provisioned for such traffic in the IP/MPLS layer.
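The following back-of-the-envelope sketch illustrates that proportionality: if only premium traffic needs pre-provisioned IP-layer protection capacity (because best-effort traffic waits for optical restoration), the reserved headroom shrinks with the best-effort share. The traffic split and the simple 1:1 protection model are illustrative assumptions, not figures from [28].

```python
# Illustrative sketch: IP-layer protection capacity that must be
# pre-provisioned, with and without optical restoration of best-effort
# traffic. Traffic volumes and the 1:1 protection model are assumptions.
def protection_capacity(peak_gbps, best_effort_share, optical_restoration):
    premium = peak_gbps * (1.0 - best_effort_share)
    best_effort = peak_gbps * best_effort_share
    if optical_restoration:
        # Only premium traffic needs IP-layer protection headroom;
        # best-effort rides out the failure until DWDM restores the link.
        return premium
    return premium + best_effort  # protect everything at the IP layer

peak = 400.0  # hypothetical peak traffic on a core link group, in Gb/s
for share in (0.2, 0.4, 0.6):
    without = protection_capacity(peak, share, optical_restoration=False)
    with_or = protection_capacity(peak, share, optical_restoration=True)
    saved = 1.0 - with_or / without
    print(f"best-effort {share:.0%}: reserve {with_or:.0f} instead of "
          f"{without:.0f} Gb/s ({saved:.0%} less protection capacity)")
```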


Figure 18.10. Normal operation (left), and DWDM restoration of a failed link over alternative paths (right) [28].

Link failure restoration is indeed becoming increasingly important for future network operations, because next-generation routers (with multi-chassis configurations and distributed operating systems) should achieve significantly higher system availability, making node failures less common. However, the practical limitations of the deployed, mostly legacy, DWDM systems currently prevent the wide adoption of optical restoration [23]. The new flexible DWDM transport layer, once widely deployed and combined with advanced WSON and GMPLS implementations, could enable optical restoration within the required time-scales, typically no more than several tens of seconds [14,28,29]. Note that, unlike optical protection, the GMPLS restoration scheme kicks in after the IP layer has quickly protected (e.g. via FRR) some or all of the traffic, relying on QoS to make the best use of the potentially reduced L3 capacity until L1 fully recovers the links. Multiple restoration path computation options could be explored, including trying a pre-planned path first and then calculating a dynamic path if it is unavailable.
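A minimal sketch of that last idea, under the assumption that the optical controller keeps a pre-planned restoration path per link and falls back to on-demand path computation when the pre-planned path is itself unusable (the function names, data, and viability check are hypothetical):

```python
# Illustrative sketch: restoration path selection after a link failure.
# Try the pre-planned restoration path first; fall back to computing a
# dynamic path. Names and the viability check are hypothetical.
from typing import Callable, Optional

def restore_link(
    failed_link: str,
    preplanned: dict,
    is_viable: Callable[[list], bool],
    compute_dynamic_path: Callable[[str], Optional[list]],
) -> Optional[list]:
    """Return a usable restoration path for the failed link, or None."""
    path = preplanned.get(failed_link)
    if path and is_viable(path):               # pre-planned path still intact?
        return path
    return compute_dynamic_path(failed_link)   # otherwise compute on demand

# Hypothetical usage:
preplanned = {"R1-R2": ["R1", "R5", "R2"]}
viable = lambda path: "R5" not in path          # pretend node R5 is also down
dynamic = lambda link: ["R1", "R6", "R7", "R2"]
print(restore_link("R1-R2", preplanned, viable, dynamic))
```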

More generally, network optimization based on a centralized path computation element (PCE) [30], implemented in combination with a distributed, open, multi-vendor MLCP, has been proposed as a further advancement of the converged (L1 and L3) transport. In this vision, PCE-driven optimization continuously re-optimizes the IP/MPLS and optical transport layers, preventing the network from gradually drifting away from optimality (as is common in today’s networks). Today, optimization is typically performed with offline modeling tools, because the network-wide traffic matrix cannot be collected in real time and future traffic forecasts usually require manual input. The benefits of continuously optimized network operation would generally increase as traffic becomes more dynamic and the network more adaptable. However, the current limitations, particularly in DWDM layer automation and in legacy network management systems (NMS, OSS), prevent a practical implementation, or a realistic accounting of the network value, of this vision.

Given the remaining limitations of legacy deployments and DWDM automation, a radically different approach to L3 and L1 convergence is also being actively explored, based on a very simple transport architecture in which next-generation multi-Tb/s routers are directly interconnected using the highest-capacity DWDM point-to-point links available, e.g. [31]. In this approach, network automation occurs only at the IP/MPLS layer, simplifying operations and minimizing cross-layer coordination, while many of the synergies of IP and DWDM convergence, such as proactive protection, still benefit network operations. The simplification of the DWDM layer could also allow lower-cost DWDM systems by reducing OADM and optical amplifier complexity. The inherent simplicity of this architecture, at least from an optical transport perspective, makes it more readily available for scaling IPNGN to hundreds of Tb/s. The main challenges here arise from cost-effectively scaling the IP/MPLS routers and high-speed DWDM transmission. We explore some of the more promising future technologies that could enable this evolution in the next section.


URL: https://www.sciencedirect.com/science/article/pii/B9780123969606000183

Beyond diversity: moving towards inclusive work environments

Paula M. Smith, in Workplace Culture in Academic Libraries, 2013

Literature review

As in other professions, the influence of organizational culture on organizational performance and productivity has been highlighted in the library literature for the past couple of decades. While diversity has been a significant aspect of the discussion, with the emergence of increased immigration, virtual exchanges (globally and domestically), and expanded categories of diversity (e.g. sexual orientation, disability, and religion, among others), attention to workplace interactions and behaviors is becoming a prominent concern of library administrators.

Various publications recount diversity awareness programs that employ cultural seminars, food fests, and library outreach initiatives. These programs are primarily intended to convey information and raise awareness about the cultural communities served by the library. Despite the aforementioned initiatives being important sources for learning about different cultural entities, further research is needed to address the cultural missteps and conflict negotiated between library personnel. Love’s take on diversity initiatives in academic libraries suggests: “Approaches to diversity must be simultaneous and inclusive, occurring at the individual and organizational levels.”1 Therefore, it is just as significant to learn how to act effectively as cultural and civil citizens in the work environment as it is to learn about the cultural attributes of our constituencies; in fact, each informs the other. Additionally, there are various accounts of diversity initiatives focused on recruitment, retention, outreach, and collection development. However, precious few publications detail professional development activities to prepare library personnel to meet the challenges workplace diversity places on day-to-day interpersonal and inter-group relationships.

Underpinning diversity research are the changing demographics in the United States and the necessity to adapt strategies in our workplaces to reflect these shifts in our communities. While academic libraries have realized the increased presence of multicultural and international populations in their campus communities and have tried to develop a workforce reflective of these transformations, the overall change in the diversity of library workforces has not kept pace.2 This suggests that the library work environment is changing but an overall imbalance remains, which can lead to power differentials and intercultural miscommunications.

One of the few articles that address library organizational multiculturalism and diversity is Joan Howland’s standout publication entitled “Challenges of Working in a Multicultural Environment.” It is an instructive source for understanding cultural conflict in the library workplace and deftly outlines six potential areas of difficulty for library organizations. They are: 1) fluctuating power dynamics, 2) merging a diversity of opinions and approaches, 3) overcoming perceived lack of empathy, 4) the perception or reality of tokenism, 5) holding everyone throughout the organization accountable for achieving a positive multicultural environment, and 6) turning each of these difficulties into opportunities.3 Additionally, Gabriel, who has written several columns on diversity for legal libraries, concurs with aspects of Howland’s writing about managing conflict in a diverse workplace. She reminds us that: “Recognizing that organizational culture exists is one thing, but to take the information you perceive about an organization and understand who holds power within it can be more difficult.”4 Generally, Howland and Gabriel both touch on issues significant to workforce diversity and demonstrate that, as organizations continue to diversify, learning to navigate these contentious areas grows exponentially.

Another clear departure from the traditional body of diversity literature is the article by Alexandra Rivera and Ricardo Andrade, “Developing a Diversity-Competent Workforce.” They presented the University of Arizona’s (UA) comprehensive approach to developing diversity-competent employees and an inclusive work environment. Rather than overemphasizing recruitment as their primary tool for tackling workplace diversity, UA employed the climate survey tool, ARL ClimateQUAL, to examine workplace culture, followed by awareness-raising programs specific to the cultural issues uncovered by the survey.5 “ClimateQUAL is an assessment of library staff perceptions concerning (a) their library’s commitment to the principles of diversity, (b) organizational policies and procedures, and (c) staff attitudes.”6 Although climate surveys are essential instruments for delivering meaningful snapshots of interactions in the work environment, Winston and Li reported that only seven percent of academic libraries make use of them.7

Since academic libraries are no longer the homogeneous environments they once were, applying equal consideration to organizational culture and employee behaviors is not a difficult concept to grasp. However, with organizational work culture surfacing as an incredibly complex and challenging area for administrators to manage and define, the difficulty seems to reside in the implementation.


URL: https://www.sciencedirect.com/science/article/pii/B978184334702650006X

Outreach in Chinese academic libraries

Shaorong Liu, in Academic Libraries in the US and China, 2013

Socializing and sharing

Although, as detailed above, the debate over opening academic libraries in China to the public has not yet been resolved, extension of academic library services to the general public should be a future trend. As the information awareness of the public increases, its information demand will increase as well. Academic library resources and professional librarians could help to meet the increasing demand from the public. Resource and service sharing among the universities in China could and should become a reality. Opening the academic library to companies and other institutions could provide a jumping-off point, a segue to opening the library to the general public. Sharing collections with these companies and institutions could create a win-win situation for all parties involved, but further research is needed in this area in order to prepare and explore all options adequately.


URL: https://www.sciencedirect.com/science/article/pii/B9781843346913500068

Social media and Big Data

Alessandro Mantelero, Giuseppe Vaciago, in Cyber Crime and Cyber Terrorism Investigator's Handbook, 2014

Array of Approved eSurveillance Legislation

With regard to the first category, and especially when the request is made by governmental agencies, the issue of the possible violation of fundamental rights becomes more delicate. The Echelon Interception System (European Parliament, 2001) and the Total Information Awareness (TIA) Program (European Parliament, 2001; European Parliament 2013a; European Parliament 2013b; DARPA. Total Information Awareness Program (TIA), 2002; National Research Council, 2008; Congressional Research Service. CRS Report for Congress, 2008) are concrete examples, and not isolated incidents, but the NSA case (European Parliament, 2013c; Auerbach et al., 2013; European Parliament, 2013a; European Parliament, 2013b)8 has most clearly shown how invasive surveillance can be in the era of global data flows and Big Data. To better understand the case, it is important to have an overview of the considerable amount of electronic surveillance legislation which, particularly in the wake of 9/11, has been approved in the United States and, to a certain extent, in a number of European countries.

The most important legislation is the Foreign Intelligence Surveillance Act (FISA) of 1978,9 which lays down the procedures for collecting foreign intelligence information through the electronic surveillance of communications for homeland security purposes. Section 702 of FISA, as amended in 2008 (FAA), extended its scope beyond the interception of communications to include any data held in public cloud computing as well. Furthermore, this section clearly indicates that two different regimes of data processing and protection exist for U.S. citizens and residents (USPERs) on the one hand, and non-U.S. citizens and residents (non-USPERs) on the other. More specifically, the Fourth Amendment is applicable only to U.S. citizens, as there is an absence of any cognizable privacy rights for “non-U.S. persons” under FISA (Bowden, 2013).

Thanks to the FISA Act and its 2008 amendment, U.S. authorities can access and process the personal data of E.U. citizens on a large scale via, among others, the National Security Agency’s (NSA) warrantless wiretapping of cable-bound internet traffic (UPSTREAM) and direct access to the personal data stored on the servers of U.S.-based private companies such as Microsoft, Yahoo, Google, Apple, Facebook or Skype (PRISM), through cross-database search programs such as X-KEYSCORE. U.S. authorities also have the power to compel disclosure of cryptographic keys, including the SSL keys used to secure data-in-transit by major search engines, social networks, webmail portals, and Cloud services in general (the BULLRUN program) (Corradino, 1989; Bowden, 2013). Recently, the United States President’s Review Group on Intelligence and Communications Technologies released a report entitled “Liberty and Security in a Changing World.” The comprehensive report sets forth 46 recommendations designed to protect national security while respecting the longstanding commitment to privacy and civil liberties, with specific reference to non-U.S. citizens (Clarke et al., 2014).

Even if the FISA Act is the most frequently applied and best-known legislative tool for conducting intelligence activities, there are other relevant pieces of legislation on electronic surveillance. One need only consider the Communications Assistance For Law Enforcement Act (CALEA) of 1994,10 which authorizes law enforcement and intelligence agencies to conduct electronic surveillance by requiring that telecommunications carriers and manufacturers of telecommunications equipment modify and design their equipment, facilities, and services to ensure that they have built-in surveillance capabilities. Furthermore, following the Patriot Act of 2001, a plethora of bills has been proposed. The most recent bills (not yet in force) are the Cyber Intelligence Sharing and Protection Act (CISPA) of 2013 (Jaycox and Opsahl, 2013), which would allow Internet traffic information to be shared between the U.S. government and certain technology and manufacturing companies, and the Protecting Children From Internet Pornographers Act of 2011,11 which extends data retention duties to U.S. Internet Service Providers.

In truth, such surveillance programs are not limited to the United States. In Europe, the Communications Capabilities Development Program has prompted a huge amount of controversy, given its intention to create a ubiquitous mass surveillance scheme for the United Kingdom (Barret, 2014) covering phone calls, text messages and emails, and extending to logging communications on social media. More recently, in June 2013, the so-called TEMPORA program showed that the UK intelligence agency Government Communications Headquarters (GCHQ) has cooperated with the NSA in surveillance and spying activities (Brown, 2013).12 These revelations were followed in September 2013 by reports focusing on the activities of Sweden’s National Defense Radio Establishment (FRA). Similar projects for the large-scale interception of telecommunications data have been conducted by both France’s General Directorate for External Security (DGSE) and Germany’s Federal Intelligence Service (BND) (Bigo et al., 2013).

Even if E.U. and U.S. surveillance programs may seem similar, there is one important difference: in the E.U., under data protection law, individuals always retain control of their own personal data, while in the U.S. the individual has more limited control once he or she has subscribed to the terms and conditions of a service.13


URL: https://www.sciencedirect.com/science/article/pii/B9780128007433000153

Instruction in Chinese academic libraries

Zhengjun Wang, in Academic Libraries in the US and China, 2013

Solutions

This may paint a bleak picture of the state of affairs in Chinese universities, but those in the field of library instruction see it as an opportunity. Progress is being made, and there are achievements in information education in academic libraries in China, from increasing competency regarding the manual location of print information to an enhancement of students’ information retrieval abilities using electronic tools. However, there is still room for improving information education. Instruction in information awareness, developing critical thinking skills, and information ethics education should be the focus of instruction programs in Chinese academic libraries.

One’s information awareness directly affects the level of one’s ability to use information effectively, and thus information literacy must serve as the foundation for students’ current and future academic lives. Critical thinking is the heart of information literacy education. Developing students’ critical thinking skills by training them to use information independently, effectively, and accurately enables them to integrate these strategies into their future research and daily lives. Students’ information awareness can be raised by improving their ability to capture, analyze, evaluate, and use information; all of this is accomplished through library instruction and various classroom activities which stress the importance of information. In this way, students are poised to make the most of information, and receive the highest level of benefit from it.

Introducing information ethics is also an integral part of the solution to current information problems. These ethical concepts can be illustrated by using real life experience and case studies to introduce the concept of fair use, as well as instruction on related legislation, which currently governs the legal ramifications of the use of certain types of information. Focusing on proper use of academic citations and related regulatory issues, such as emphasizing the restrictions on the number of and extent of electronic document downloads, are critical.


URL: https://www.sciencedirect.com/science/article/pii/B9781843346913500020

The Digital Twin Paradigm for Smarter Systems and Environments: The Industry Use Cases

Pethuru Raj, Chellammal Surianarayanan, in Advances in Computers, 2020

14 The future

Digital twin (DT) use cases have moved out of the conceptual stage to deliver real-world impacts across enterprises that are currently executing their digital transformation initiatives. The rapid proliferation of industrial IoT (IIoT) products has laid a stimulating foundation for the widespread interest in this captivating phenomenon. That is, multiple sensors and actuators are attached to industrial machinery and instruments; other purpose-specific and agnostic devices, appliances, wares, and equipment in that environment are connected with the machinery; and the physical twins are integrated with a wide array of cyber applications and databases over any network. In short, physical assets, in association with many other physical and cyber systems, collect multi-structured data and transmit them to a faraway digital twin, which extracts actionable insights and context-aware information in time.
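A minimal sketch of that data flow, assuming a twin that mirrors one machine's state from streamed sensor readings and derives a simple threshold-based insight; all names, thresholds, and readings are hypothetical rather than taken from the chapter.

```python
# Illustrative sketch: a digital twin mirroring one physical asset from
# streamed sensor readings and deriving a simple insight. All names,
# thresholds, and readings are hypothetical.
from dataclasses import dataclass, field

@dataclass
class MachineTwin:
    asset_id: str
    max_bearing_temp_c: float = 80.0            # assumed safe operating limit
    state: dict = field(default_factory=dict)   # latest mirrored sensor values
    alerts: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Update the mirrored state and derive simple insights."""
        self.state.update(reading)
        temp = self.state.get("bearing_temp_c")
        if temp is not None and temp > self.max_bearing_temp_c:
            self.alerts.append(
                f"{self.asset_id}: bearing temp {temp} C exceeds limit"
            )

twin = MachineTwin("press-07")
for reading in ({"rpm": 1200, "bearing_temp_c": 65.0},
                {"rpm": 1250, "bearing_temp_c": 83.5}):
    twin.ingest(reading)
print(twin.state, twin.alerts)
```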

Coupled with increasingly capable data aggregation, simulation, and knowledge visualization, new insights can be revealed to improve operational effectiveness, differentiate products and services, and increase worker productivity. And with augmented reality emerging as the new HMI to bring 3D content and real-time insights to workers, the projected era of the digital twin is becoming a reality.

Digital twin (DT) technology is already being leveraged by various industry houses to stay ahead of digital disruption by understanding changing customer preferences, market sentiments, technology advancements, etc. This technological paradigm is capable of bringing radical changes to businesses, such as delivering quality products quickly to the knowledge-driven market and extracting viable intelligence from product data through the DT to produce better, customer-aligned products. Thus, product intelligence is the crucial differentiator, achieved primarily through the distinct competencies of the digital twin paradigm. The future is also bright.

The powerful emergence of artificial intelligence (AI) and cognitive computing can substantially increase the special abilities of the digital twin. Technologies such as machine and deep learning algorithms, computer vision, natural language processing (NLP), and real-time analytics of big data are recognized as the foundations of next-generation digital twins. Automated analytics is the new paradigm enabled by the pioneering advancements in AI, and the combination of AI and the digital twin is going to be game-changing for product engineering and advancement.


URL: https://www.sciencedirect.com/science/article/pii/S006524581930049X

Data Mining

Colleen McCue, in Data Mining and Predictive Analysis, 2007

Is Data Mining Evil?

Further confounding the question of whether to acquire data mining technology is the heated debate regarding not only its value in the public safety community but also whether data mining reflects an ethical, or even legal, approach to the analysis of crime and intelligence data. The discipline of data mining came under fire in the Data Mining Moratorium Act of 2003.

Unfortunately, much of the debate that followed has been based on misinformation and a lack of knowledge regarding these very important tools. Like many of the devices used in public safety, data mining and predictive analytics can confer great benefit and enhanced public safety through their judicious deployment and use. Similarly, these same assets also can be misused or employed for unethical or illegal purposes.

One of the harshest criticisms has addressed important privacy issues. It has been suggested that data mining tools threaten to invade the privacy of unknowing citizens and unfairly target them for invasive investigative procedures that are associated with a high risk of false allegations and unethical labeling of certain groups. The concern regarding an individual's right to privacy versus the need to enhance public safety represents a long-standing tension within the law enforcement and intelligence communities that is not unique to data mining. In fact, this concern is misplaced in many ways because data mining in and of itself has a limited ability, if any, to compromise privacy. Privacy is maintained through restricting access to data and information. Data mining and predictive analytics merely analyze the data that is made available; they may be extremely powerful tools, but they are tools nonetheless. With data mining, ensuring privacy should be no different than with any other technique or analytical approach.

Unfortunately, many of these fears were based on a misunderstanding of the Total Information Awareness system (TIA, later changed to the Terrorism Information Awareness system), which promised to combine and integrate wide-ranging data and information systems from both the public and private sectors in an effort to identify possible terrorists. Originally developed by the Defense Advanced Research Projects Agency (DARPA), this program was ultimately dismantled, due at least in part to the public outcry and concern regarding potential abuses of private information. Subsequent review of the program, however, determined that its main shortcoming was related to the failure to conduct a privacy impact study in an effort to ensure the maintenance of individual privacy; this is something that organizations considering these approaches should include in their deployment strategies and use of data-mining tools.

On the other hand, some have suggested that incorporation of data mining and predictive analytics might result in a waste of resources. This underscores a lack of information regarding these analytical tools. Blindly deploying resources based on gut feelings, public pressure, historical precedent, or some other vague notion of crime prevention represents a true waste of resources. One of the greatest potential strengths of data mining is that it gives public safety organizations the ability to allocate increasingly scarce law enforcement and intelligence resources in a more efficient manner while accommodating a concomitant explosion in the available information—the so-called “volume challenge” that has been cited repeatedly during investigations into law enforcement and intelligence failures associated with 9/11. Data mining and predictive analytics give law enforcement and intelligence professionals the ability to put more evidence-based input into operational decisions and the deployment of scarce resources, thereby limiting the potential waste of resources in a way not available previously.

Regarding the suggestion that data mining has been associated with false leads and law enforcement mistakes, it is important to note that these errors happen already, without data mining. This is why there are so many checks and balances in the system—to protect the innocent. We do not need data mining or technology to make errors; we have been able to do that without the assistance of technology for many years. There is no reason to believe that these same checks and balances would not continue to protect the innocent were data mining to be used extensively. On the other hand, basing our activities on real evidence can only increase the likelihood that we will correctly identify the bad guys while helping to protect the innocent by casting a more targeted net. Like the difference between a shotgun and a laser-sighted 9mm, there is always the possibility of an error, but there is much less collateral damage with the more accurate weapon.

Again, the real issue in the debate comes back to privacy concerns. People do not like law enforcement knowing their business, which is a very reasonable concern, particularly when viewed in light of past abuses. Unfortunately, this attitude confuses process with input issues and places the blame on the tool rather than on the data resources tapped. Data mining can only be used on the data that are made available to it. Data mining is not a vast repository designed to maintain extensive files containing both public and private records on each and every American, as has been suggested by some. It is an analytical tool. If people are concerned about privacy issues, then they should focus on the availability of and access to sensitive data resources, not the analytical tools. Banning an analytical tool because of fear that it will be misused is similar to banning pocket calculators because some people use them to cheat on their taxes.

As with any powerful weapon used in the war on terrorism, the war on drugs, or the war on crime, safety starts with informed public safety consumers and well-trained personnel. As is emphasized throughout this text, domain expertise frequently is the most important component of a well-informed, professional program of data mining and predictive analytics. As such, it should be seen as an essential responsibility of each agency to ensure active participation on the part of those in the know; those professionals from within each organization that know where the data came from and how it will be used. To relinquish the responsibility for analysis to outside organizations or consultants should be viewed in the same way as a suggestion to entirely contract patrol services to a private security corporation: an unacceptable abdication of an essential responsibility.

Unfortunately, serious misinformation regarding this very important tool might limit or somehow curtail its future use when we most need it in our fight against terrorism. As such, it is incumbent upon each organization to ensure absolute integrity and an informed decision-making process regarding the use of these tools and their output in an effort to ensure their ongoing availability and access for public safety applications.


URL: https://www.sciencedirect.com/science/article/pii/B9780750677967500258

Why are we here and where do we want to go? Program mission, goals and objectives

Nancy W. Noe, in Creating and Maintaining an Information Literacy Instruction Program in the Twenty-First Century, 2013

Defining your purpose

Your mission statement must promote lifelong learning and professional development (ACRL, 2012a).

An information literacy instruction program helps students learn the skills necessary for academic success, but its impact should not stop at graduation. We are also preparing students for professional and personal success. While information literacy has been a fundamental concept within education for decades, the workplace, less familiar with the nomenclature of our profession, nonetheless recognizes the same skill components (Leavitt, 2011; Lloyd, 2011). In an investigative study among various professionals, the following practices were identified as information literacy experiences:

using information technology for information awareness and communication;

finding information from appropriate sources;

executing a process;

controlling information;

building a personal knowledge base in a new area of interest;

working with knowledge and personal perspectives adopted in such a way that novel insights are gained; and

using information widely for the benefit of others.

(Bruce, 1999)

Similarly, in a survey of corporate managers conducted by James Madison University (JMU) Library in collaboration with JMU’s College of Business, when asked about necessary research and information skills for entry-level workers, participants cited “the ability to synthesize, summarize, and present information; the ability to perform data analysis; and the ability to think critically and creatively about research topics and findings” (Sokoloff, 2012: 11).

Librarians must consider workforce development. Here’s an example. I teach a number of general English composition classes. While the composition curriculum has moved well beyond the traditional literature intensive focus, a few instructors still want library classes to focus solely on a particular subject-specific database or narrow literature topic. The argument I use when planning information literacy outcomes for those classes is that we, for the most part, are not training future academicians, but career professionals: accountants, nurses, engineers and a variety of other professionals. They are going to become the professionals we will rely on for our health, our bridges or our audit-free tax returns. One of the main objectives of a library session should be to help students learn to access, evaluate and select credible and reliable resources; practices we hope continue as they make information choices within the workplace.

Information needs also permeate everyday life. For example, the number of adult users who are turning to the Internet to access health information is growing (Fox and Jones, 2009). While health educators are contending with web-based health information needs in clinical settings (Wright and Grabowsky, 2011), in working with patients they are also building upon and reiterating the same kind of selection and evaluation strategies students may have learned as they worked on their first research paper. Any number of social interactions require citizens to be information savvy. For example, the voting public, bombarded with print and visual political rhetoric, must be able to navigate the onslaught of candidates’ promotional materials and make an informed decision during an election. Parents may need to research and evaluate area school systems. Consumers should research and evaluate purchases, such as appliances and automobiles. Even something like purchasing a daily cup of coffee could entail looking at nutritional values and making a decision based on a calorie count. It is difficult to find an example of information not being a part of daily life.

There is a multitude of library mission and instruction program statements available on the web. Here are three which exemplify best practice:

Source: http://library.hunter.cuny.edu/services/information literacymission, accessed 5 August 2012.

Library Instruction Mission Statement – Hunter College

The Library Instruction Program serves the students, faculty and staff of Hunter College. The purpose of the Program is to assist members of the college community with developing information-seeking abilities appropriate for their individual levels of scholarship and to support their research. Through this program, the library facilitates access to the vast resources available to the College, fosters a sense of independence and responsibility and encourages a collaborative relationship with Reference/Instruction faculty. The classroom faculty is invited to work with library instruction faculty in designing course-related assignments that include information literacy objectives. The Library Instruction Program incorporates teaching strategies and methodologies that respond to individual differences in learning including level, style and culture.

Source: http://www.uncp.edu/tlc/sacs/SACS_Report/submission/documents/1144.pdf, accessed 5 August 2012.

Information Literacy Mission – University of North Carolina at Pembroke

As part of the mission of the University of North Carolina at Pembroke, the Mary Livermore Library plays a critical role in supporting the university’s commitment to academic excellence. As such, the Library’s Instructional Services area promotes information literacy through a formal instruction program that aims to provide the university’s campus community with the skills needed to find, retrieve, analyze and use information. Instructional Services offers students, faculty and staff a structured approach towards learning information literacy skills that is hands-on and curriculum based.

Source: https://library.sjsu.edu/instructional-services/our-instructional-program, accessed 5 August 2012.

San Jose State University Library – Our Instructional Program – Mission

The campus Library and Information Literacy Instruction Program collaborates with faculty, instructional staff and campus administrators to pursue its primary mission: to make information literacy integral to the SJSU experience through curriculum-focused instruction, provision of instructional materials and faculty-librarian cooperation. We work with you to ensure that students in on-campus, hybrid and distance courses and programs develop the skills, attitudes and knowledge that they need in order to become efficient, effective users and producers of information. This essential set of abilities will not only give students some of the critical tools necessary for success in their academic careers at the university, but also help them prepare for a lifetime of learning.


URL: https://www.sciencedirect.com/science/article/pii/B978184334705750002X

The Human Intrusion Detection System

Ira Winkler, Araceli Treu Gomes, in Advanced Persistent Security, 2017

Perform Positive Outreach

One of the best things any security team can do is create a positive relationship with the organization as a whole. The better the relationship, the more likely people are to detect potential security incidents, and especially report those incidents.

When people feel a reasonable connection with the security staff, they will better engage with them. Anything that can be done to ensure that the security department does not just appear when there is a problem will improve the willingness of people to approach the security team when something goes wrong.

There are many ways to instill such goodwill. For the purposes of this section, we will focus on two primary methods: providing useful information and outreach. Regarding providing useful information, the security team should provide awareness information that helps individuals to secure their home, family and personal resources. This gives people the belief that the organization cares about them, and provides a useful benefit back to them as well. It helps to foster a sense of both belonging and responsibility to the well being of the organization.

When preparing security awareness programs for some organizations, we find that there are two types of organizations: those that want to provide information relevant to employees for personal and business purposes, and those that state that all information provided should be specific to workplace security. We find that the organizations interested in protecting information at home as well tend to have better overall security.

For example, just as safe drivers drive as carefully in company cars as in their personal automobiles, people who practice safe computing at home will likely have safer computing practices at work. At a minimum, consider that employees who fall victim to computer-related incidents in their private lives will be distracted at work and will need to spend time clearing up the resulting problems. So we always advise organizations to reconsider any limitations on their awareness programs.

Topics that are specific to home use include how to protect children on the Internet, how to securely configure your home network, and protecting your cellphone. Clearly, any topic that appears not to have a work related aspect to it will be welcomed by individuals for what it is. Employees appreciate any information they can apply in their personal lives.

The type of resources to distribute can include newsletters, tip cards, handouts, and more creative materials. For example, you can provide mobile device security kits, which may include privacy shields, tip cards for securing mobile devices, and subscriptions to anti-malware software for mobile devices. Many companies also give away annual subscriptions to anti-malware subscriptions for personal computers and laptops. All of these materials demonstrate an interest in the individual's well being.

Additionally, the security team should hold events as a form of outreach. These events can be as traditional as setting up booths in public areas to hand out information, holding contests, showing movies with a security-related theme (with popcorn as the enticement to attend), holding lunch-and-learn briefings for employees, or providing briefings to different departments within the organization to highlight concerns specific to those departments, among other creative endeavors. Remember, food always helps.

Any time you engage with employees or other insiders in a way that does not involve an incident or confrontation, you make those people more willing to seek you out when there is an incident. For this reason alone, the security department should encourage its staff to engage in non-work-related committees and efforts, so that people have more opportunities to engage with the staff and have ready access to the security team. This is similar to the Police Athletic League, where police departments set up athletic opportunities for children, so that the children develop positive feelings toward police in general, but also get to know some police officers personally, so that they trust them and know they can go to them should they ever have a problem.


URL: https://www.sciencedirect.com/science/article/pii/B9780128093160000154

Conscious and Unconscious Processes in Cognition

A. Cleeremans, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2 Methodological Challenges

Because there is no accepted operational definition of what it means for an agent to be conscious of something, complex measurement challenges arise in the study of the relationships between conscious and unconscious cognition.

First, consciousness is not a single process or phenomenon, but rather encompasses many dimensions of experience. A first important challenge thus arises in delineating which aspects of consciousness count when assessing whether a subject is aware or not of a particular piece of information: awareness of the presence or absence of a stimulus, conscious memory of a specific previous processing episode, awareness of one's intention to use some information, or awareness that one's behavior is influenced by some previous processing episode. Different aspects of conscious processing are engaged by different paradigms. In subliminal perception studies, for instance, one is concerned with determining whether stimuli that have not been consciously encoded can influence subsequent responses. In contrast, implicit memory research has been more focused on retrieval processes, that is, on the unintentional, automatic effects that previously consciously perceived stimuli can exert on subsequent decisions. In studies of implicit learning, it is the relationships between ensembles of consciously processed stimuli that remain purportedly unconscious. These subtle differences in which specific aspects of the situation are available to awareness illustrate the need to distinguish carefully between awareness during encoding and awareness during retrieval of information. Further, both encoding and retrieval can concern either individual stimuli or relationships between sets of stimuli, and both can either be intentional or not.

A second important challenge is to devise an appropriate measure of awareness. Most experimental paradigms dedicated to exploring the relationships between conscious and unconscious processing have relied on a simple dissociation logic aimed at comparing the sensitivity of two different measures to some relevant information: a measure C of subjects' awareness of the information, and a measure P of behavioral sensitivity to the same information in the context of some task. Unconscious processing, according to the simple dissociation logic, is then demonstrated whenever P exhibits sensitivity to some information in the absence of correlated sensitivity in C. There are several potential pitfalls with the simple dissociation logic, however. First, the measures C and P cannot typically be obtained concurrently. This ‘retrospective assessment’ problem entails that finding that C fails to be sensitive to the relevant information need not necessarily imply that information was processed unconsciously during encoding, but that, for instance, it might have been forgotten before retrieval. A second issue is to ensure that the information revealed through C is indeed relevant to performing the task. As Shanks and St. John (1994) have suggested, many studies of implicit learning have failed to respect this ‘information’ criterion. For instance, successful classification in an artificial grammar learning task need not necessarily be based on knowledge of the rules of the grammar, but can instead involve knowledge of the similarity relationships between training and test items. Subjects asked about the rules of the grammar would then understandably fail to offer relevant explicit knowledge.

A third issue is to ensure that C and P are both equally sensitive to the relevant information. At first sight, verbal reports and other subjective measures such as confidence ratings would appear to offer the most direct way of assessing the contents of subjective experience, but such measures are often difficult to operationalize in a sufficiently controlled manner. For instance, people might simply refrain from reporting on knowledge held with low confidence, or might offer reports that are essentially reconstructive in nature, as Nisbett and Wilson's experiments indicate. For this reason, many authors have advocated using so-called objective measures of awareness. Objective measures of awareness include forced-choice tests such as recognition, presence–absence decisions, or identification.

Even if these different conditions are fulfilled, however, it might be elusive to hope to obtain measures of awareness that are simultaneously exclusive and exhaustive with respect to knowledge held consciously. In other words, finding null sensitivity in C, as required by the dissociation paradigms for unconscious processing to be demonstrated, might simply be impossible because no such absolute measure exists. A significant implication of this conclusion is that, at least with normal participants, it makes little sense to assume that conditions exist in which awareness can simply be ‘turned off.’ Much of the ongoing debate about the existence of subliminal perception can be attributed to a failure to recognize the limitations of the dissociation logic.

It might therefore instead be more plausible to assume that any task is always sensitive to both conscious and unconscious influences. In other words, no task is process-pure. Two methodological approaches that attempt specifically to overcome the conceptual limitations of the dissociation logic have been developed. The first was introduced by Reingold and Merikle (1988), who suggested that the search for absolute measures of awareness should simply be abandoned in favor of approaches that seek to compare the sensitivity of direct measures and indirect measures of some discrimination. Direct measures involve tasks in which the instructions make explicit reference to the relevant discrimination, and include objective measures such as recognition or recall. In contrast, indirect measures, such as stem completion in memory tasks, make no reference to the relevant discrimination. By assumption, direct measures should exhibit greater sensitivity than, or equal sensitivity to, indirect measures to consciously held task-relevant information, for subjects would be expected to be more successful in using conscious information when instructed to do so than when not. Hence, demonstrating that an indirect task is more sensitive to some information than a comparable direct task can only be interpreted as indicating unconscious influences on performance.

The second approach—Larry Jacoby's ‘Process Dissociation Procedure’ (Jacoby 1991)—constitutes one of the most significant advances in the study of differences between implicit and explicit memory. It is based on the argument that, just as direct measures can be contaminated by unconscious influences, indirect measures can likewise be contaminated by conscious influences: particular tasks can simply not be identified with particular underlying processes. The process dissociation procedure thus aims to tease apart the relative contributions of conscious and unconscious influences on performance. To do so, two conditions are compared in which conscious and unconscious influences either both contribute to performance improvement, or act against each other. For instance, subjects might be asked to memorize a list of words and then, after some delay, to perform a stem completion task in which word stems are to be completed either so as to form one of the words memorized earlier (the inclusion condition) or so as to form a different word (the exclusion condition). If the stems nevertheless tend to be completed by memorized words under exclusion instructions, then one can only conclude that memory for these words was implicit, since if subjects had been able consciously to recollect them, they would have avoided using them to complete the stems. Numerous experiments have now been designed using the process dissociation procedure. Collectively, they offer convincing evidence that performance can be influenced by unconscious information in the absence of conscious, subjective awareness.
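The excerpt does not spell out the estimation step, but the standard formalization associated with Jacoby's procedure treats inclusion and exclusion performance as combinations of a conscious contribution C and an unconscious contribution U. The sketch below shows that logic with hypothetical stem-completion rates; it is an illustrative reconstruction, not text from the chapter.

```python
# Sketch of the standard process dissociation estimates (Jacoby, 1991):
#   P(inclusion) = C + U * (1 - C)   # conscious or unconscious influence
#   P(exclusion) = U * (1 - C)       # unconscious influence despite exclusion
# so C = inclusion - exclusion and U = exclusion / (1 - C).
def process_dissociation(inclusion: float, exclusion: float):
    """Estimate conscious (C) and unconscious (U) contributions."""
    c = inclusion - exclusion
    u = exclusion / (1.0 - c) if c < 1.0 else float("nan")
    return c, u

# Hypothetical rates: 60% of stems completed with studied words under
# inclusion instructions, 25% under exclusion instructions.
c, u = process_dissociation(0.60, 0.25)
print(f"C = {c:.2f}, U = {u:.2f}")  # C = 0.35, U = 0.38
```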


URL: https://www.sciencedirect.com/science/article/pii/B0080430767035609

What is an awareness and understanding of a set of information and ways that information can be made useful to support a specific task or reach a decision?

Knowledge is the awareness and understanding of a set of information and the ways that information can be made useful to support a specific task or reach a decision.

Is the awareness and understanding of a set of information and the ways that information can be?

Knowledge is the awareness and understanding of a set of information and ways that information can be made useful to support a specific task or arrive at a decision.

Which definition describes the information age?

The Information Age is the idea that access to and the control of information is the defining characteristic of this current era in human civilization.

Which one of the following is the study of the nature and origin of knowledge what it means to know?

epistemology, the philosophical study of the nature, origin, and limits of human knowledge. The term is derived from the Greek epistēmē (“knowledge”) and logos (“reason”), and accordingly the field is sometimes referred to as the theory of knowledge.