
Secret Intelligence Service


Research, Information, Analysis and Methodology 

Notes


The Intelligence Service employs analysts to break down and understand a wide array of questions. Intelligence agencies may use heuristics, inductive and deductive reasoning, social network analysis, dynamic network analysis, link analysis, and brainstorming to sort through problems they face. Military intelligence may explore issues through the use of game theory, Red Teaming, and war gaming. Signals intelligence applies cryptanalysis and frequency analysis to break codes and ciphers. Business intelligence applies theories of competitive intelligence analysis and competitor analysis to resolve questions in the marketplace. Law enforcement intelligence applies a number of theories in crime analysis.

HEURISTICS :  refers to experience-based techniques for problem solving, learning, and discovery that give a solution which is not guaranteed to be optimal. Where an exhaustive search is impractical, heuristic methods are used to speed up the process of finding a satisfactory solution via mental shortcuts to ease the cognitive load of making a decision. Examples of this method include using a rule of thumb, an educated guess, an intuitive judgment, stereotyping, or common sense.

In more precise terms, heuristics are strategies using readily accessible, though loosely applicable, information to control problem solving in human beings and machines.

People operate within what Herbert Simon called bounded rationality. Simon coined the term ‘satisficing’, which denotes the situation where people seek solutions or accept choices or judgments that are “good enough” for their purposes, but which could be optimized.
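As a minimal sketch of the distinction (in Python, with an invented scoring table and an invented “good enough” threshold, purely for illustration):

# Illustrative sketch of optimising versus satisficing.
def optimise(options, score):
    # Exhaustive search: examine every option and return the best one.
    return max(options, key=score)

def satisfice(options, score, good_enough):
    # Satisficing: accept the first option whose score clears the threshold.
    for option in options:
        if score(option) >= good_enough:
            return option
    return None  # no option was 'good enough'

candidates = ["plan_a", "plan_b", "plan_c", "plan_d"]
utility = {"plan_a": 0.4, "plan_b": 0.7, "plan_c": 0.9, "plan_d": 0.6}
print(optimise(candidates, utility.get))         # plan_c, the true optimum
print(satisfice(candidates, utility.get, 0.65))  # plan_b, the first acceptable option

The satisficer stops as soon as an acceptable option is found, trading optimality for a much smaller search effort.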

PSYCHOLOGICAL HEURISTICS

Anchoring and adjustment – Describes the common human tendency to rely too heavily on the first piece of information offered (the “anchor”) when making decisions. For example, in a study done with children, the children were told to estimate the number of jellybeans in a jar. Groups of children were given either a high or low “base” number (anchor). Children estimated the number of jellybeans to be closer to the anchor number that they were given.

Availability heuristic – A mental shortcut that occurs when people make judgments about the probability of events by the ease with which examples come to mind. For example, in a 1973 Tversky & Kahneman experiment, the majority of participants reported that there were more words in the English language that start with the letter K than words for which K is the third letter. There are actually about twice as many English words with K as the third letter as there are words that start with K, but words that start with K are much easier to recall and bring to mind.
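The K-word claim is easy to check against a word list. A rough sketch (in Python; the word-list path is an assumption and varies by system):

# Rough check of the K-word claim against a local word list.
# /usr/share/dict/words is an assumed path; substitute any plain-text word list.
with open("/usr/share/dict/words") as f:
    words = [w.strip().lower() for w in f if len(w.strip()) >= 3]

starts_with_k = sum(1 for w in words if w[0] == "k")
third_letter_k = sum(1 for w in words if w[2] == "k")
print("start with k:", starts_with_k)
print("k as third letter:", third_letter_k)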

Representativeness heuristic – A mental shortcut used when making judgments about the probability of an event under uncertainty; that is, judging a situation based on how similar the prospects are to the prototypes the person holds in his or her mind. For example, in a 1982 Tversky and Kahneman experiment, participants were given a description of a woman named Linda. Based on the description, it was likely that Linda was a feminist.

80-90% of participants responded that it was also more likely for Linda to be a feminist and a bank teller than just a bank teller. The likelihood of two events occurring together cannot be greater than that of either event individually; for this reason, the representativeness heuristic here exemplifies the conjunction fallacy.
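As a worked check of that inequality (the numbers below are invented for illustration): for any events A and B, P(A and B) = P(A) x P(B given A), and since P(B given A) is at most 1, P(A and B) can never exceed P(A). If, say, P(Linda is a bank teller) = 0.05, then even if P(feminist given bank teller) were 0.9, P(bank teller and feminist) = 0.05 x 0.9 = 0.045, which is still below 0.05.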

Naïve diversification – When asked to make several choices at once, people tend to diversify more than when making the same type of decision sequentially.

Escalation of commitment – Describes the phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the cost, starting today, of continuing the decision outweighs the expected benefit.

Familiarity heuristic – A mental shortcut applied to various situations in which individuals assume that the circumstances underlying the past behavior still hold true for the present situation and that the past behavior thus can be correctly applied to the new situation. Especially prevalent when the individual experiences a high cognitive load.

HEURISTIC DEVICE :  is used when an entity X exists to enable understanding of, or knowledge concerning, some other entity Y. A good example is a model that, as it is never identical with what it models, is a heuristic device to enable understanding of what it models. Stories, metaphors, etc., can also be termed heuristic in that sense. A classic example is the notion of utopia as described in Plato’s best-known work, The Republic. This means that the ‘ideal city’ as depicted in The Republic is not given as something to be pursued, or to present an orientation-point for development; rather, it shows how things would have to be connected, and how one thing would lead to another (often with highly problematic results), if one would opt for certain principles and carry them through rigorously.

STEREOTYPING :  is a type of heuristic that all people use to form opinions or make judgments about things they have never seen or experienced. Stereotypes work as a mental shortcut for everything from inferring a person’s social status from their actions to assuming that a tall plant with a trunk and leaves is a tree, even when we have never seen that particular type of tree before.

Stereotypes, as described by Lippmann, are the pictures we have in our heads, which are built around experiences as well as what we are told about the world.

INDUCTIVE LOGIC (as opposed to deductive logic) is reasoning in which the premises seek to supply strong evidence for (not absolute proof of) the truth of the conclusion. While the conclusion of a deductive argument is supposed to be certain, the truth of an inductive argument is supposed to be probable, based upon the evidence given.

Inductive reasoning forms the basis of most scientific theories, e.g. Darwinian evolution, the Big Bang theory and Einstein’s theory of relativity.

Inductive reasoning is inherently uncertain. It only deals in the degree to which, given the premises, the conclusion is credible according to some theory of evidence.

Almost all people are taller than 55 inches

Olga is a person

Therefore, Olga is almost certainly taller than 55 inches

DEDUCTIVE LOGIC, also deductive reasoning or logical deduction or, informally, top down logic, is the process of reasoning from one or more general statements / premises to reach a logically certain conclusion.

Deductive reasoning links premises with conclusions. If all premises are true, the terms are clear, and the rules of deductive logic are followed, then the conclusion reached is necessarily true.

An example of a deductive argument:

All women are mortal.

Sherlock Holmes is a woman.

Therefore, Sherlock Holmes is mortal.

The first premise states that all objects classified as “women” have the attribute “mortal”. The second premise states that “Sherlock Holmes” is classified as a “woman” – a member of the set “women”. The conclusion then states that “Sherlock Holmes” must be “mortal” because Holmes inherits this attribute from the classification as a “woman”.
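The set-membership reading of the syllogism can be made mechanical. A toy sketch (Python; purely illustrative, and taking the premises as given rather than as facts):

# Premise 1: all members of 'women' are members of 'mortals'.
# Premise 2: 'Sherlock Holmes' is a member of 'women'.
women = {"Sherlock Holmes"}
mortals = set()
mortals.update(women)                # encode premise 1 as a subset relation
print("Sherlock Holmes" in mortals)  # True: the conclusion is entailed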

SOCIAL NETWORK ANALYSIS  is the analysis of social networks. Social network analysis views social relationships in terms of network theory, consisting of nodes (representing individual actors within the network) and ties (which represent relationships between the individuals, such as friendship, kinship, organizations, sexual relationships, etc.). These networks are often depicted in a social network diagram, where nodes are represented as points and ties are represented as lines.
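A small sketch of nodes and ties as a graph (Python, assuming the networkx library is available; the actors and ties are invented):

import networkx as nx

# Nodes are actors; edges are ties (here a single, generic 'knows' relationship).
g = nx.Graph()
g.add_edges_from([
    ("Alice", "Bob"),
    ("Bob", "Carol"),
    ("Carol", "Alice"),
    ("Carol", "Dave"),   # Dave reaches the triad only through Carol
])

print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "ties")
print("Dave's ties:", list(g.neighbors("Dave")))

Drawing such a graph with node points and line ties gives exactly the social network diagram described above.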

Social network analysis has its theoretical roots in the work of early sociologists such as Georg Simmel and Emile Durkheim, who wrote about the importance of studying patterns of relationships that connect social actors. Social scientists have used the concept of ‘social networks’ since early in the 20th century to connote complex sets of relationships between members of social systems at all scales, from interpersonal to international.

In 1954, J. A. Barnes started using the term systematically to denote patterns of ties, encompassing concepts traditionally used by the public and those used by social scientists: bounded groups (e.g., tribes, families) and social categories (e.g., gender, ethnicity). Scholars such as Ronald Burt, Kathleen Carley, Mark Granovetter, David Krackhardt, Edward Laumann, Barry Wellman, Douglas R. White, and Harrison White expanded the use of systematic social network analysis. Even in the study of literature, network analysis has been applied by Anheier, Gerhards and Romo, Wouter De Nooy, and Burgert Senekal. Indeed, social network analysis has found applications in various academic disciplines, as well as practical applications such as countering money laundering and terrorism.

CONNECTIONS

HOMOPHILY : The extent to which actors form ties with similar versus dissimilar others. Similarity can be defined by gender, race, age, occupation, educational achievement, status, values or any other salient characteristic. Homophily is also referred to as assortativity.

MULTIPLEXITY : The number of content-forms contained in a tie. For example, two people who are friends and also work together would have a multiplexity of 2. Multiplexity has been associated with relationship strength.

MUTUALITY / RECIPROCITY : The extent to which two actors reciprocate each other’s friendship or other interaction.

NETWORK CLOSURE : A measure of the completeness of relational triads. An individual’s assumption of network closure (i.e. that their friends are also friends) is called transitivity. Transitivity is an outcome of the individual or situational trait of Need for Cognitive Closure.

PROPINQUITY : The tendency for actors to have more ties with geographically close others.

DISTRIBUTIONS

BRIDGE: An individual whose weak ties fill a structural hole, providing the only link between two individuals or clusters. It also includes the shortest route when a longer one is unfeasible due to a high risk of message distortion or delivery failure.

CENTRALITY : Centrality refers to a group of metrics that aim to quantify the ‘importance’ or ‘influence’ (in a variety of senses) of a particular node (or group) within a network.  Examples of common methods of measuring ‘centrality’ include betweenness centrality,  closeness centrality, eigenvector centrality, alpha centrality and degree centrality.
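An illustrative computation of several of these centrality measures (Python with networkx assumed; the built-in karate-club graph is used only as a convenient small social network):

import networkx as nx

g = nx.karate_club_graph()   # a standard small social network bundled with networkx

degree      = nx.degree_centrality(g)
betweenness = nx.betweenness_centrality(g)
closeness   = nx.closeness_centrality(g)
eigenvector = nx.eigenvector_centrality(g)

# Rank nodes by betweenness to suggest likely brokers between parts of the network.
top = sorted(betweenness, key=betweenness.get, reverse=True)[:3]
print("highest betweenness:", top)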

DENSITY : The proportion of direct ties in a network relative to the total number possible.

DISTANCE: The minimum number of ties required to connect two particular actors, as popularized by Stanley Milgram’s small world experiment and the idea of ‘six degrees of separation’.

STRUCTURAL HOLES: The absence of ties between two parts of a network. Finding and exploiting a structural hole can give an entrepreneur a competitive advantage. This concept was developed by sociologist Ronald Burt, and is sometimes referred to as an alternate conception of social capital.

TIE STRENGTH: Defined by the linear combination of time, emotional intensity, intimacy and reciprocity (i.e. mutuality). Strong ties are associated with homophily, propinquity and transitivity, while weak ties are associated with bridges.

SEGMENTATION

Groups are identified as ‘cliques’ if every individual is directly tied to every other individual, ‘social circles’ if there is less stringency of direct contact, which is imprecise, or as structurally cohesive blocks if precision is wanted.[25]

CLUSTERING COEFFICIENT : A measure of the likelihood that two associates of a node are themselves associates. A higher clustering coefficient indicates a greater ‘cliquishness’.

COHESION: The degree to which actors are connected directly to each other by cohesive bonds. Structural cohesion refers to the minimum number of members who, if removed from a group, would disconnect the group.
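A short illustrative computation of density, clustering and structural cohesion on the same example graph (networkx assumed):

import networkx as nx

g = nx.karate_club_graph()

print("density:", nx.density(g))                      # proportion of possible ties present
print("avg clustering:", nx.average_clustering(g))    # mean local 'cliquishness'
print("node connectivity:", nx.node_connectivity(g))  # minimum removals needed to disconnect the group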

Modelling and Visualisation of Networks

Visual representation of social networks is important to understand the network data and convey the result of the analysis. Numerous methods of visualization for data produced by social network analysis have been presented, and many analytic software packages have modules for network visualisation. Exploration of the data is done through displaying nodes and ties in various layouts, and attributing colors, size and other advanced properties to nodes. Visual representations of networks may be a powerful method for conveying complex information, but care should be taken in interpreting node and graph properties from visual displays alone, as they may misrepresent structural properties better captured through quantitative analyses.

Collaboration graphs can be used to illustrate good and bad relationships between humans. A positive edge between two nodes denotes a positive relationship (friendship, alliance, dating) and a negative edge between two nodes denotes a negative relationship (hatred, anger). Signed social network graphs can be used to predict the future evolution of the graph. In signed social networks, there is the concept of ‘balanced’ and ‘unbalanced’ cycles. A balanced cycle is defined as a cycle where the product of all the signs is positive. Balanced graphs represent a group of people who are unlikely to change their opinions of the other people in the group. Unbalanced graphs represent a group of people who are very likely to change their opinions of the people in their group. For example, a group of 3 people (A, B, and C) where A and B have a positive relationship, B and C have a positive relationship, but C and A have a negative relationship is an unbalanced cycle. This group is very likely to morph into a balanced cycle, such as one where B only has a good relationship with A, and both A and B have a negative relationship with C. By using the concept of balanced and unbalanced cycles, the evolution of signed social network graphs can be predicted.
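The balance rule is simple to state in code. A minimal sketch (Python; the actors and signs are the A/B/C example above):

# A cycle in a signed network is balanced if the product of its edge signs is positive.
signs = {
    ("A", "B"): +1,   # positive relationship
    ("B", "C"): +1,   # positive relationship
    ("C", "A"): -1,   # negative relationship
}

def is_balanced(cycle_edges):
    product = 1
    for edge in cycle_edges:
        product *= signs[edge]
    return product > 0

print(is_balanced([("A", "B"), ("B", "C"), ("C", "A")]))  # False: the triad is unbalanced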

Especially when using social network analysis as a tool for facilitating change, different approaches of participatory network mapping have proven useful. Here participants / interviewers provide network data by actually mapping out the network (with pen and paper or digitally) during the data collection session. An example of a pen-and-paper network mapping approach, which also includes the collection of some actor attributes (perceived influence and goals of actors), is the Net-map toolbox. One benefit of this approach is that it allows researchers to collect qualitative data and ask clarifying questions while the network data is collected.

APPLICATIONS

Social network analysis is used extensively in a wide range of applications and disciplines. Some common network analysis applications include data aggregation and mining, network propagation modeling, network modeling and sampling, user attribute and behavior analysis, community-maintained resource support, location-based interaction analysis, social sharing and filtering, recommender systems development, and link prediction and entity resolution. In the private sector, businesses use social network analysis to support activities such as customer interaction and analysis, marketing, and business intelligence needs. Some public sector uses include development of leader engagement strategies, analysis of individual and group engagement and media use, and community-based problem solving.

Social network analysis is also used in intelligence, counter-intelligence and law enforcement activities. This technique allows analysts to map a clandestine or covert organization such as an espionage ring, an organized crime family or a street gang. The UK Intelligence Services use clandestine mass electronic surveillance programs to generate the data needed to perform this type of analysis on terrorist cells and other networks deemed relevant to national security. The NSA looks up to three nodes deep during this network analysis. After the initial mapping of the social network is complete, analysis is performed to determine the structure of the network and to determine, for example, the leaders within the network. This allows military or law enforcement assets to launch capture-or-kill decapitation attacks on the high-value targets in leadership positions to disrupt the functioning of the network.

DYNAMIC NETWORK ANALYSIS is an emergent scientific field that brings together traditional social network analysis (SNA), link analysis (LA) and multi-agent systems (MAS) within network science and network theory. There are two aspects of this field. The first is the statistical analysis of DNA data. The second is the utilization of simulation to address issues of network dynamics. DNA networks vary from traditional social networks in that they are larger, dynamic, multi-mode, multi-plex networks, and may contain varying levels of uncertainty. The main difference between DNA and SNA is that DNA takes into account the interactions of social features that condition the structure and behavior of networks. DNA is tied to temporal analysis, but temporal analysis is not necessarily tied to DNA, as changes in networks sometimes result from external factors which are independent of social features found in networks. One of the earliest and most notable cases of the use of DNA is Sampson’s monastery study, in which he took snapshots of the same network at different intervals and observed and analyzed the evolution of the network.

DNA statistical tools are generally optimized for large-scale networks and admit the analysis of multiple networks simultaneously, in which there are multiple types of nodes (multi-node) and multiple types of links (multi-plex). In contrast, SNA statistical tools focus on single-mode or at most two-mode data and facilitate the analysis of only one type of link at a time.

DNA statistical tools tend to provide more measures to the user, because they have measures that use data drawn from multiple networks simultaneously. Latent space models (Sarkar and Moore, 2005) and agent-based simulation are often used to examine dynamic social networks (Carley et al., 2009). From a computer simulation perspective, nodes in DNA are like atoms in quantum theory: nodes can be, though need not be, treated as probabilistic. Whereas nodes in a traditional SNA model are static, nodes in a DNA model have the ability to learn. Properties change over time; nodes can adapt: a company’s employees can learn new skills and increase their value to the network; or, capture one terrorist and three more are forced to improvise. Change propagates from one node to the next and so on. DNA adds the element of a network’s evolution and considers the circumstances under which change is likely to occur.

There are three main features to dynamic network analysis that distinguish it from standard social network analysis. First, rather than just using social networks, DNA looks at meta-networks. Second, agent-based modeling and other forms of simulations are often used to explore how networks evolve and adapt as well as the impact of interventions on those networks. Third, the links in the network are not binary; in fact, in many cases they represent the probability that there is a link.

META NETWORK : is a multi-mode, multi-link, multi-level network. Multi-mode means that there are many types of nodes; e.g., people and locations. Multi-link means that there are many types of links; e.g., friendship and advice. Multi-level means that some nodes may be members of other nodes, such as a network composed of people and organisations in which one of the links records who is a member of which organisation.

While different researchers use different modes, common modes reflect who, what, when, where, why and how. A simple example of a meta-network is the PCANS formulation with people, tasks, and resources. A more detailed formulation considers people, tasks, resources, knowledge, and organizations.
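One possible way to record a multi-mode, multi-link meta-network is to attach a mode to each node and a type to each link. A sketch (Python with networkx assumed; the entities are invented):

import networkx as nx

m = nx.MultiDiGraph()
m.add_node("Olga", mode="person")
m.add_node("Acme Ltd", mode="organisation")
m.add_node("London", mode="location")

m.add_edge("Olga", "Acme Ltd", link="member_of")   # multi-level: a person within an organisation
m.add_edge("Olga", "London", link="located_in")
m.add_edge("Acme Ltd", "London", link="based_in")

for u, v, data in m.edges(data=True):
    print(u, "->", v, "(", data["link"], ")")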

THE DNA NETWORK – TYPICAL ISSUES WORKED ON

Developing metrics and statistics to assess and identify change within and across networks.

Developing and validating simulations to study network change, evolution, adaptation, decay.

Developing and testing theory of network change, evolution, adaptation, decay.

Developing and validating formal models of network generation and evolution

Developing techniques to visualize network change overall or at the node or group level

Developing statistical techniques to see whether differences observed over time in networks are due to simply different samples from a distribution of links and nodes or changes over time in the underlying distribution of links and nodes

Developing control processes for networks over time

Developing algorithms to change distributions of links in networks over time

Developing algorithms to track groups in networks over time

Developing tools to extract or locate networks from various data sources such as texts

Developing statistically valid measurements on networks over time

Examining the robustness of network metrics under various types of missing data

Empirical studies of multi-mode multi-link multi-time period networks

Examining networks as probabilistic time-variant phenomena

Forecasting change in existing networks

Identifying trails through time given a sequence of networks

Identifying changes in node criticality given a sequence of networks

Anything else related to multi-mode multi-link multi-time period networks

Studying random walks on temporal networks.

Quantifying structural properties of contact sequences in dynamic networks, which influence dynamical processes

A SOCIAL NETWORK is a social structure made up of a set of social actors (such as individuals or organizations) and a set of the dyadic ties between these actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities as well as a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine network dynamics.

The analysis of social networks is an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the “web of group affiliations”. Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science.

The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units, see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organisational studies, social psychology, sociology, and sociolinguistics.

LEVELS OF ANALYSIS

In general, social networks are self-organising, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis[35] of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative. Practical limitations of computing power, ethics and participant recruitment and payment also limit the scope of a social network analysis. The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher’s theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level.

MICRO LEVEL

At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context.

Dyadic level : A dyad is a social relationship between two individuals. Network research on dyads may concentrate on structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality.

Triadic level : Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality.

Actor level : The smallest unit of analysis in a social network is an individual in their social setting, i.e., an “actor” or “ego”. Ego-network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis or other genealogical studies of relationships between individuals.

Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behaviour.

MESO LEVEL

In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks.

Organisations : Formal organisations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organisational or inter-organizational ties in terms of formal or informal relationships. Intra-organisational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a workgroup level and organisation level, focusing on the interplay between the two structures.

Randomly distributed networks : Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s. This framework can represent the social-structural effects commonly observed in many human social networks, including general degree-based structural effects, reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behaviour.

Scale-free networks : A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; however, in general, scale-free networks have some common characteristics. One notable characteristic in a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called “hubs”, and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. The Barabási-Albert model of network evolution is an example of a scale-free network.
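A sketch of preferential-attachment growth and its heavy-tailed degree distribution (Python with networkx assumed; the network size and attachment parameter are arbitrary choices for illustration):

import networkx as nx
from collections import Counter

# Grow a network by preferential attachment (the Barabási-Albert model).
g = nx.barabasi_albert_graph(n=10000, m=3, seed=42)

degrees = [d for _, d in g.degree()]
counts = Counter(degrees)
print("max degree:", max(degrees), " mean degree:", sum(degrees) / len(degrees))
# A handful of very-high-degree hubs alongside many low-degree nodes is the
# signature of the approximately power-law degree distribution.
print("nodes with degree >= 50:", sum(c for d, c in counts.items() if d >= 50))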

MACRO LEVEL

Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population.

Large-scale networks : Large-scale network is a term somewhat synonymous with “macro-level” as used, primarily, in the social and behavioural sciences and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping).

Complex networks : Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure, and hierarchical structure. In the case of agency-directed networks these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features.


BRAINSTORMING

BRAINSTORMING :  is a group or individual creativity technique by which efforts are made to find a conclusion for a specific problem by gathering a list of ideas spontaneously contributed by its member(s). The term was popularized by Alex Faickney Osborn in the 1953 book Applied Imagination. Osborn claimed that brainstorming was more effective than individuals working alone in generating ideas, although more recent research has questioned this conclusion. Today, the term is used as a catch-all for group ideation sessions.

Origin

Advertising executive Alex F. Osborn began developing methods for creative problem solving in 1939. He was frustrated by employees’ inability to develop creative ideas individually for ad campaigns. In response, he began hosting group-thinking sessions and discovered a significant improvement in the quality and quantity of ideas produced by employees. Osborn outlined the method in his 1948 book, ‘Your Creative Power’, in chapter 33, “How to Organise a Squad to Create Ideas”.

OSBORN : BRAINSTORMING METHOD

Osborn claimed that two principles contribute to ‘ideative efficacy,’ these being :

To defer judgment

To reach for quantity

Following these two principles were his four general rules of brainstorming, established with the intention to :

reduce social inhibitions among group members,

stimulate idea generation

increase overall creativity of the group.

Focus on quantity : This rule is a means of enhancing divergent production, aiming to facilitate problem solving through the maxim quantity breeds quality. The assumption is that the greater the number of ideas generated, the greater the chance of producing a radical and effective solution.

Withhold criticism : In brainstorming, criticism of ideas generated should be put ‘on hold’. Instead, participants should focus on extending or adding to ideas, reserving criticism for a later ‘critical stage’ of the process. By suspending judgment, participants will feel free to generate unusual ideas.

Welcome unusual ideas : To get a good and long list of ideas, unusual ideas are welcomed. They can be generated by looking from new perspectives and suspending assumptions. These new ways of thinking may provide better solutions.

Combine and improve ideas : Good ideas may be combined to form a single better good idea, as suggested by the slogan “1+1=3”. It is believed to stimulate the building of ideas by a process of association.

Applications

Osborn notes that brainstorming should address a specific question; he held that sessions addressing multiple questions were inefficient.

Further, the problem must require the generation of ideas rather than judgment; he uses examples such as generating possible names for a product as proper brainstorming material, whereas analytical judgments such as whether or not to marry do not have any need for brainstorming.

Groups

Osborn envisioned groups of around 12 participants, including both experts and novices. Participants are encouraged to provide wild and unexpected answers. Ideas receive no criticism or discussion. The group simply provides ideas that might lead to a solution and applies no analytical judgement as to their feasibility; judgements are reserved for a later date.

VARIATIONS

Nominal group technique

Participants are asked to write their ideas anonymously. Then the facilitator collects the ideas and the group votes on each idea. The vote can be as simple as a show of hands in favor of a given idea. This process is called distillation.

After distillation, the top ranked ideas may be sent back to the group or to subgroups for further brainstorming. For example, one group may work on the color required in a product. Another group may work on the size, and so forth. Each group will come back to the whole group for ranking the listed ideas. Sometimes ideas that were previously dropped may be brought forward again once the group has re-evaluated the ideas.

It is important that the facilitator be trained in this process before attempting to facilitate this technique. The group should be primed and encouraged to embrace the process. Like all team efforts, it may take a few practice sessions to train the team in the method before tackling the important ideas.

Group passing technique

Each person in a circular group writes down one idea, and then passes the piece of paper to the next person, who adds some thoughts. This continues until everybody gets his or her original piece of paper back. By this time, it is likely that the group will have extensively elaborated on each idea.

The group may also create an ‘idea book’ and post a distribution list or routing slip to the front of the book. On the first page is a description of the problem. The first person to receive the book lists his or her ideas and then routes the book to the next person on the distribution list. The second person can log new ideas or add to the ideas of the previous person. This continues until the distribution list is exhausted. A follow-up “read out” meeting is then held to discuss the ideas logged in the book. This technique takes longer, but it allows individuals time to think deeply about the problem.

Team idea mapping method

This method of brainstorming works by the method of association. It may improve collaboration and increase the quantity of ideas, and is designed so that all attendees participate and no ideas are rejected.

The process begins with a well-defined topic. Each participant brainstorms individually, then all the ideas are merged onto one large idea map. During this consolidation phase, participants may discover a common understanding of the issues as they share the meanings behind their ideas. During this sharing, new ideas may arise by the association, and they are added to the map as well. Once all the ideas are captured, the group can prioritize and/or take action.

Directed brainstorming

Directed brainstorming is a variation of electronic brainstorming (described below). It can be done manually or with computers. Directed brainstorming works when the solution space (that is, the set of criteria for evaluating a good idea) is known prior to the session. If known, those criteria can be used to constrain the ideation process intentionally.

In directed brainstorming, each participant is given one sheet of paper (or electronic form) and told the brainstorming question. They are asked to produce one response and stop, then all of the papers (or forms) are randomly swapped among the participants. The participants are asked to look at the idea they received and to create a new idea that improves on that idea based on the initial criteria. The forms are then swapped again and respondents are asked to improve upon the ideas, and the process is repeated for three or more rounds.

In the laboratory, directed brainstorming has been found to almost triple the productivity of groups over electronic brainstorming.

Guided brainstorming

A guided brainstorming session is time set aside to brainstorm either individually or as a collective group about a particular subject under the constraints of perspective and time. This type of brainstorming removes all cause for conflict and constrains conversations while stimulating critical and creative thinking in an engaging, balanced environment.

Participants are asked to adopt different mindsets for a pre-defined period of time while contributing their ideas to a central mind map drawn by a pre-appointed scribe. Having examined the problem from multiple perspectives, participants seemingly see the simple solutions that collectively create greater growth. Action is assigned individually.

Following a guided brainstorming session, participants emerge with ideas ranked for further brainstorming and research, questions that remain unanswered, and a prioritized, assigned, actionable list that leaves everyone with a clear understanding of what needs to happen next and the ability to visualize the combined future focus and greater goals of the group.

Individual brainstorming

“Individual brainstorming” is the use of brainstorming on one’s own. It typically includes such techniques as free writing, free speaking, word association, and drawing a mind map, which is a visual note-taking technique in which people diagram their thoughts. Individual brainstorming is a useful method in creative writing and has been shown to be superior to traditional group brainstorming.

Research has shown individual brainstorming to be more effective in idea-generation than group brainstorming.

Question brainstorming

This process involves brainstorming the questions, rather than trying to come up with immediate answers and short-term solutions. Theoretically, this technique should not inhibit participation, as there is no need to provide solutions. The answers to the questions form the framework for constructing future action plans. Once the list of questions is set, it may be necessary to prioritise them in order to reach the best solution in an orderly way.

Questorming is another term for this mode of inquiry.

 

COMPUTER ASSISTED BRAINSTORMING

Brainstorming software and Electronic meeting system

Although brainstorming can take place online through commonly available technologies such as email or interactive web sites, there have also been many efforts to develop customized computer software that can either replace or enhance one or more manual elements of the brainstorming process.

When using electronic meeting systems (EMS), as they came to be called, group members simultaneously and independently entered ideas into a computer terminal. The software collected (or pooled) the ideas into a list, which could be displayed on a central projection screen (anonymised if desired). Other elements of these EMSs could support additional activities such as categorization of ideas, elimination of duplicates, assessment and discussion of prioritized or controversial ideas. Later EMSs capitalised on advances in computer networking and internet protocols to support asynchronous brainstorming sessions over extended periods of time and in multiple locations.

Proponents such as Gallupe et al. argue that electronic brainstorming eliminates many of the problems of standard brainstorming, including production blocking (i.e. group members must take turns to express their ideas) and evaluation apprehension (i.e. fear of being judged by others). This positive effect increases with larger groups. A perceived advantage of this format is that all ideas can be archived electronically in their original form, and then retrieved later for further thought and discussion. Electronic brainstorming also enables much larger groups to brainstorm on a topic than would normally be productive in a traditional brainstorming session.

Computer supported brainstorming may overcome some of the challenges faced by traditional brainstorming methods. For example, ideas might be “pooled” automatically, so that individuals do not need to wait to take a turn, as in verbal brainstorming. Some software programs show all ideas as they are generated (via chat room or e-mail). The display of ideas may cognitively stimulate brainstormers, as their attention is kept on the flow of ideas being generated without the potential distraction of social cues such as facial expressions and verbal language. Electronic brainstorming techniques have been shown to produce more ideas and help individuals focus their attention on the ideas of others better than a brainwriting technique (in which participants write individual notes in silence and then communicate them to the group). The production of more ideas has been linked to the fact that paying attention to others’ ideas leads to non-redundancy, as brainstormers try to avoid replicating or repeating another participant’s comment or idea.

Web-based brainstorming techniques allow contributors to post their comments anonymously through the use of avatars. This technique also allows users to log on over an extended time period, typically one or two weeks, to allow participants some “soak time” before posting their ideas and feedback. This technique has been used particularly in the field of new product development, but can be applied in any number of areas requiring collection and evaluation of ideas.

Some limitations of EBS include the fact that it can flood people with more ideas at one time than they can attend to, and that people may compare their performance to others by analyzing how many ideas each individual produces (social matching).

MILITARY APPLICATIONS

GAME THEORY : is the study of strategic decision making. Specifically, it is “the study of mathematical models of conflict and cooperation between intelligent rational decision-makers”. An alternative term suggested “as a more descriptive name for the discipline” is interactive decision theory. Game theory is mainly used in economics, political science, and psychology, as well as logic and biology. The subject first addressed zero-sum games, such that one person’s gains exactly equal the net losses of the other participant(s). Today, however, game theory applies to a wide range of behavioral relations, and has developed into an umbrella term for the logical side of decision science, including both humans and non-humans, such as computers.

Modern game theory began with the idea of the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann’s original proof used Brouwer’s fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by his 1944 book Theory of Games and Economic Behavior, with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.
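As an illustration of solving a two-person zero-sum game for its mixed-strategy value, the sketch below applies the standard linear-programming transformation (Python, with numpy and scipy assumed to be available; the payoff matrix is the matching-pennies game, chosen only as a familiar example):

import numpy as np
from scipy.optimize import linprog

# Row player's payoffs in a two-person zero-sum game (matching pennies).
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

# Shift payoffs to be strictly positive so the standard LP transformation applies.
shift = 1.0 - A.min()
B = A + shift

# Minimise sum(u) subject to B^T u >= 1, u >= 0; the game value is 1/sum(u).
res = linprog(c=np.ones(B.shape[0]),
              A_ub=-B.T, b_ub=-np.ones(B.shape[1]),
              bounds=[(0, None)] * B.shape[0])
strategy = res.x / res.x.sum()
value = 1.0 / res.x.sum() - shift
print("row player's mixed strategy:", strategy)  # approximately [0.5, 0.5]
print("game value:", value)                      # approximately 0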

This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. Eight game-theorists have won the Nobel Memorial Prize in Economic Sciences, and John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology.

RED TEAM : is an independent group that challenges an organisation to improve its effectiveness. The intelligence community (military and civilian) has red teams that explore alternative futures and so on.

Private businesses and, as noted, intelligence bodies (MI5, MI6, GCHQ) have long used Red Teams.

Penetration testers assess organization security, often unbeknownst to client staff. This type of Red Team provides a more realistic picture of the security readiness than exercises, role playing or announced assessments. The Red Team may trigger active controls and countermeasures within a given operational environment.

In wargaming the opposing force (or OPFOR) in a simulated military conflict may be referred to as a red cell (a very narrow form of Red Teaming) and may also engage in red team activity. The key theme is that the aggressor is composed of various threat actors, equipment and techniques that are at least partially unknown by the defenders. The red cell challenges the operations planning by playing the role of a thinking enemy. In United States war-gaming simulations, the U.S. force is always the Blue Team and the opposing force is always the Red Team.

When applied to intelligence work, red-teaming is sometimes called alternative analysis.[2]

When used in a hacking context, a RED TEAM is a group of white-hat hackers that attack an organization’s digital infrastructure as an attacker would in order to test the organization’s defenses (often known as penetration testing).

Benefits include challenges to preconceived notions and clarification of the problem state that planners are attempting to mitigate. A more accurate understanding can be gained of how sensitive information is externalized, as well as of exploitable patterns and instances of bias.

Army

Red teaming can be defined as a structured, iterative process executed by trained, educated and practiced team members that provides commanders an independent capability to continuously challenge plans, operations, concepts, organisations and capabilities in the context of the operational environment and from our partners’ and adversaries’ perspectives.

Red Team Leaders are or should be proficient in:

Analysing complex systems and problems from different perspectives to aid in decision making, using models of theory.

Employing concepts, theories, insights, tools and methodologies of cultural and military anthropology to predict others’ perceptions of our strengths and vulnerabilities.

Applying critical and creative thinking in the context of the operational environment to fully explore alternatives to plans, operations, concepts, organisations and capabilities.

Applying advanced analytical skills and techniques at tactical through strategic levels and developing products supporting command decision making and execution.

Elsewhere

Red teaming is normally associated with assessing vulnerabilities and limitations of systems or structures. Various watchdog agencies employ red teaming. Red teaming refers to the work performed to provide an adversarial perspective, especially when this perspective includes plausible tactics, techniques, and procedures as well as realistic policy and doctrine.

Military simulations, also known informally as war games, are simulations in which theories of warfare can be tested and refined without the need for actual hostilities. Many professional analysts object to the term wargames as this is generally taken to be referring to the civilian hobby, thus the preference for the term simulation.

Simulations exist in many different forms, with varying degrees of realism. In recent times, the scope of simulations has widened to include not only military but also political and social factors, which are seen as inextricably entwined in a realistic warfare model.

Whilst many governments make use of simulation, both individually and collaboratively, little is known about it outside professional circles. Yet modelling is often the means by which governments test and refine their military and political policies. Military simulations are seen as a useful way to develop tactical, strategic and doctrinal solutions, but critics argue that the conclusions drawn from such models are inherently flawed, due to the approximate nature of the models used.

Military Simulations range from field exercises through computer simulations to analytical models; the realism of live manoeuvres is countered by the economy of abstract simulations.

The term ‘military simulation’ can cover a wide spectrum of activities, ranging from full-scale field-exercises, to abstract computerised models that can proceed with little or no human involvement.

As a general scientific principle, the most reliable data comes from actual observation and the most reliable theories depend on it. This also holds true in military analysis, where analysts look towards live field-exercises and trials as providing data likely to be realistic (depending on the realism of the exercise) and verifiable (it has been gathered by actual observation). One can readily discover, for example, how long it takes to construct a bridge under given conditions with given manpower, and this data can then generate norms for expected performance under similar conditions in the future, or serve to refine the bridge-building process. Any form of training can be regarded as a “simulation” in the strictest sense of the word (inasmuch as it simulates an operational environment); however, many if not most exercises take place not to test new ideas or models, but to provide the participants with the skills to operate within existing ones.

Full-scale military exercises, or even smaller-scale ones, are not always feasible or even desirable. Availability of resources, including money, is a significant factor — it costs a lot to release troops and material from any standing commitments, to transport them to a suitable location, and then to cover additional expenses such as petroleum, oil and lubricants usage, equipment maintenance, supplies and consumables replenishment and other items. In addition, certain warfare models do not lend themselves to verification using this realistic method. It might, for example, prove counter-productive to accurately test an attrition scenario by killing one’s own troops.

Moving away from the field exercise, it is often more convenient to test a theory by reducing the level of personnel involvement. Map exercises can be conducted involving senior officers and planners, but without the need to physically move around any troops. These retain some human input, and thus can still reflect to some extent the human imponderables that make warfare so challenging to model, with the advantage of reduced costs and increased accessibility. A map exercise can also be conducted with far less forward planning than a full-scale deployment, making it an attractive option for more minor simulations that would not merit anything larger, as well as for very major operations where cost, or secrecy, is an issue.

Increasing the level of abstraction still further, simulation moves towards an environment readily recognized by civilian wargamers. This type of simulation can be manual, implying no (or very little) computer involvement, computer-assisted, or fully computerised.

Graf Helmuth von Moltke is nowadays regarded as the grandfather of modern military simulation. Although not the inventor of Kriegsspiel, he was greatly impressed by it as a young officer, and as Chief of Staff of the Prussian Army promoted its use as a training aid.

Manual simulations have probably been in use in some form since mankind first went to war. Chess can be regarded as a form of military simulation (although its precise origins are debated). In more recent times, the forerunner of modern simulations was the Prussian game Kriegsspiel, which appeared around 1811 and is sometimes credited with the Prussian victory in the Franco-Prussian War. It was distributed to each Prussian regiment and they were ordered to play it regularly, prompting a visiting German officer to declare in 1824, “It’s not a game at all! It’s training for war!” Eventually so many rules sprang up, as each regiment improvised its own variations, that two versions came into use. One, known as “rigid Kriegsspiel”, was played by strict adherence to the lengthy rule book. The other, “free Kriegsspiel”, was governed by the decisions of human umpires. Each version had its advantages and disadvantages: rigid Kriegsspiel contained rules covering most situations, and the rules were derived from historical battles where those same situations had occurred, making the simulation verifiable and rooted in observable data, which some later American models discarded. However, its prescriptive nature acted against any impulse of the participants towards free and creative thinking. Conversely, free Kriegsspiel could encourage this type of thinking, as its rules were open to interpretation by umpires and could be adapted during operation. This very interpretation, though, tended to negate the verifiable nature of the simulation, as different umpires might well adjudge the same situation in different ways, especially where there was a lack of historical precedent. In addition, it allowed umpires to weight the outcome, consciously or otherwise.

The above arguments are still cogent in the modern, computer-heavy military simulation environment. There remains a recognised place for umpires as arbiters of a simulation, hence the persistence of manual simulations in war colleges throughout the world. Both computer-assisted and entirely computerised simulations are common as well, with each being used as required by circumstances.

Addendum

The Rand Corporation is one of the best known designers of Military Simulations for the US Government and Air Force, and one of the pioneers of the Political-Military simulation. Their SAFE (Strategic And Force Evaluation) simulation is an example of a manual simulation, with one or more teams of up to ten participants being sequestered in separate rooms and their moves being overseen by an independent director and his staff. Such simulations may be conducted over a few days (thus requiring commitment from the participants): an initial scenario (for example, a conflict breaking out in the Persian Gulf) is presented to the players with appropriate historical, political and military background information. They then have a set amount of time to discuss and formulate a strategy, with input from the directors/umpires (often called Control) as required. Where more than one team is participating, teams may be divided on partisan lines — traditionally Blue and Red are used as designations, with Blue representing the ‘home’ nation and Red the opposition. In this case, the teams will work against each other, their moves and counter-moves being relayed to their opponents by Control, who will also adjudicate on the results of such moves. At set intervals, Control will declare a change in the scenario, usually of a period of days or weeks, and present the evolving situation to the teams based on their reading of how it might develop as a result of the moves made. For example, Blue Team might decide to respond to the Gulf conflict by moving a carrier battle group into the area whilst simultaneously using diplomatic channels to avert hostilities. Red Team, on the other hand, might decide to offer military aid to one side or another, perhaps seeing an opportunity to gain influence in the region and counter Blue’s initiatives. At this point Control could declare a week has now passed, and present an updated scenario to the players: possibly the situation has deteriorated further and Blue must now decide if they wish to pursue the military option, or alternatively tensions might have eased and the onus now lies on Red as to whether to escalate by providing more direct aid to their clients.

Computer-assisted simulations are really just a development of the manual simulation, and again there are different variants on the theme. Sometimes the computer assistance will be nothing more than a database to help umpires keep track of information during a manual simulation. At other times one or other of the teams might be replaced by a computer-simulated opponent (known as an agent or automaton). This can reduce the umpires’ role to interpreter of the data produced by the agent, or obviate the need for an umpire altogether. Most commercial wargames designed to run on computers (such as Blitzkrieg, the Total War series and even the Civilisation games) fall into this category.

Where agents replace both human teams, the simulation can become fully computerised and can, with minimal supervision, run by itself. The main advantage of this is the ready accessibility of the simulation — beyond the time required to program and update the computer models, no special resources are needed. A fully computerised simulation can run at virtually any time and in almost any location, the only equipment required being a laptop computer. There is no need to juggle schedules to suit busy participants, acquire suitable facilities and arrange for their use, or obtain security clearances. An additional important advantage is the ability to perform many hundreds or even thousands of iterations in the time it would take a manual simulation to run once. This means statistical information can be gleaned from such a model; outcomes can be quoted in terms of probabilities, and plans developed accordingly.
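
The statistical point can be made concrete. The sketch below runs a deliberately crude, invented engagement model many thousands of times and reports the outcome as a probability; the 5% hit chance and the force sizes are arbitrary assumptions, not doctrine.

    import random

    def one_engagement(blue_strength=100, red_strength=90):
        # Toy attrition loop: each round every surviving unit has a 5% chance of
        # destroying an opposing unit. The rule is illustrative, not doctrinal.
        while blue_strength > 0 and red_strength > 0:
            blue_hits = sum(random.random() < 0.05 for _ in range(blue_strength))
            red_hits = sum(random.random() < 0.05 for _ in range(red_strength))
            red_strength -= blue_hits
            blue_strength -= red_hits
        return blue_strength > 0  # True if Blue still has units in the field

    runs = 10_000
    blue_wins = sum(one_engagement() for _ in range(runs))
    print(f"Blue prevails in {blue_wins / runs:.1%} of {runs} iterations")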

Removing the human element entirely means the results of the simulation are only as good as the model itself. Validation thus becomes extremely significant — data must be correct, and must be handled correctly by the model: the modeller’s assumptions (“rules”) must adequately reflect reality, or the results will be nonsense. Various mathematical formulae have been devised over the years to attempt to predict everything from the effect of casualties on morale to the speed of movement of an army in difficult terrain. One of the best known is the Lanchester Square Law, formulated by the British engineer Frederick Lanchester in 1914. He expressed the fighting strength of a (then) modern force as proportional to the square of its numerical strength multiplied by the fighting value of its individual units. The Lanchester Law is often known as the attrition model, as it can be applied to show the balance between opposing forces as one side or the other loses numerical strength.
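
A small numerical sketch of the square law follows. The attrition coefficients and force sizes are assumed figures chosen only to show the effect: the side with twice the numbers but half the per-unit fighting value still prevails, because fighting strength scales with the square of numbers.

    def lanchester_square(A, B, a, b, dt=0.01):
        # dA/dt = -b*B and dB/dt = -a*A; the quantity a*A**2 - b*B**2 is conserved,
        # which is why fighting strength scales with the square of numerical strength.
        while A > 0 and B > 0:
            A, B = A - b * B * dt, B - a * A * dt
        return max(A, 0.0), max(B, 0.0)

    # Side A has twice the numbers but half the per-unit fighting value (assumed
    # figures); the square law predicts A still wins, with roughly 141 survivors.
    survivors_A, survivors_B = lanchester_square(A=200, B=100, a=0.5, b=1.0)
    print(f"A survivors: {survivors_A:.0f}, B survivors: {survivors_B:.0f}")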

SIGINT

Cryptanalysis (from the Greek kryptós, “hidden”, and analýein, “to loosen” or “to untie”) is the study of information systems with the aim of uncovering their hidden aspects. Cryptanalysis is used to breach cryptographic security systems and gain access to the contents of encrypted messages, even when the cryptographic key is unknown.

In addition to mathematical analysis of cryptographic algorithms, cryptanalysis also includes the study of side-channel attacks that do not target weaknesses in the cryptographic algorithms themselves, but instead exploit weaknesses in their implementation.

Even though the goal has been the same, the methods and techniques of cryptanalysis have changed drastically through the history of cryptography, adapting to increasing cryptographic complexity, ranging from the pen-and-paper methods of the past, through machines like the British Bombes and Colossus computers at Bletchley Park in World War II, to the mathematically advanced computerised schemes of the present. Methods for breaking modern cryptosystems often involve solving carefully constructed problems in pure mathematics, the best-known being integer factorisation.
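
As a minimal illustration of what ‘integer factorisation’ means here, the sketch below factors a small number by trial division. Real attacks on modern systems rely on far more sophisticated algorithms; this toy routine only shows the shape of the problem.

    def trial_division(n):
        # Return the prime factors of n by testing divisors up to sqrt(n).
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(trial_division(8051))  # [83, 97] -- a toy semiprime of the RSA shape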

Overview

Given some encrypted data (“ciphertext”), the goal of the cryptanalyst is to gain as much information as possible about the original, unencrypted data (“plaintext”).

Amount of information available to the attacker

Attacks can be classified by the type of information available to the attacker. As a basic starting point it is normally assumed, for the purposes of analysis, that the general algorithm is known; this is Shannon’s maxim, “the enemy knows the system”, which is in turn equivalent to Kerckhoffs’ principle. This is a reasonable assumption in practice — throughout history there are countless examples of secret algorithms falling into wider knowledge, variously through espionage, betrayal and reverse engineering. (On occasion, ciphers have even been reconstructed through pure deduction; the German Lorenz cipher, the Japanese Purple code and a variety of classical schemes are examples.) The main classes of attack are:

Ciphertext-only: the cryptanalyst has access only to a collection of ciphertexts or codetexts.

Known-plaintext: the attacker has a set of ciphertexts for which he knows the corresponding plaintexts (a toy known-plaintext key recovery is sketched after this list).

Chosen-plaintext (chosen-ciphertext): the attacker can obtain the ciphertexts (plaintexts) corresponding to an arbitrary set of plaintexts (ciphertexts) of his own choosing.

Adaptive chosen-plaintext: like a chosen-plaintext attack, except the attacker can choose subsequent plaintexts based on information learned from previous encryptions. Similarly Adaptive chosen ciphertext attack.

Related-key attack: like a chosen-plaintext attack, except the attacker can obtain ciphertexts encrypted under two different keys. The keys are unknown, but the relationship between them is known; for example, two keys that differ in a single bit.
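
To make the distinction between these attack models concrete, the toy sketch below uses a single-byte XOR ‘cipher’, invented purely for demonstration and not one of the schemes named above. Given only the ciphertext the analyst must guess, but a single known plaintext byte yields the key at once.

    def xor_encrypt(plaintext: bytes, key: int) -> bytes:
        # Toy "cipher": XOR every byte with the same one-byte key.
        return bytes(b ^ key for b in plaintext)

    ciphertext = xor_encrypt(b"ATTACK AT DAWN", key=0x5A)

    # Known-plaintext attack: the analyst knows the message starts with "A",
    # so XORing the first ciphertext byte with "A" recovers the key directly.
    recovered_key = ciphertext[0] ^ ord("A")
    print(hex(recovered_key))                      # 0x5a
    print(xor_encrypt(ciphertext, recovered_key))  # b'ATTACK AT DAWN'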

Computational resources needed

Attacks can also be characterised by the resources they require. Those resources include:

Time — the number of computation steps (like encryptions) which must be performed.

Memory — the amount of storage required to perform the attack.

Data — the quantity of plaintexts and ciphertexts required.

It is sometimes difficult to predict these quantities precisely, especially when the attack isn’t practical to actually implement for testing. But academic cryptanalysts tend to provide at least the estimated order of magnitude of their attacks’ difficulty, saying, for example, “SHA-1 collisions now 2^52”.

Bruce Schneier notes that even computationally impractical attacks can be considered breaks: “Breaking a cipher simply means finding a weakness in the cipher that can be exploited with a complexity less than brute force. Never mind that brute-force might require 2^128 encryptions; an attack requiring 2^110 encryptions would be considered a break…simply put, a break can just be a certificational weakness: evidence that the cipher does not perform as advertised.”

Partial breaks

The results of cryptanalysis can also vary in usefulness. For example, cryptographer Lars Knudsen (1998) classified various types of attack on block ciphers according to the amount and quality of secret information that was discovered:

Total break — the attacker deduces the secret key.

Global deduction — the attacker discovers a functionally equivalent algorithm for encryption and decryption, but without learning the key.

Instance (local) deduction — the attacker discovers additional plaintexts (or ciphertexts) not previously known.

Information deduction — the attacker gains some Shannon information about plaintexts (or ciphertexts) not previously known.

Distinguishing algorithm — the attacker can distinguish the cipher from a random permutation.

Academic attacks are often against weakened versions of a cryptosystem, such as a block cipher or hash function with some rounds removed. Many, but not all, attacks become exponentially more difficult to execute as rounds are added to a cryptosystem, so it’s possible for the full cryptosystem to be strong even though reduced-round variants are weak. Nonetheless, partial breaks that come close to breaking the original cryptosystem may mean that a full break will follow; the successful attacks on DES, MD5, and SHA-1 were all preceded by attacks on weakened versions.

In academic cryptography, a weakness or a break in a scheme is usually defined quite conservatively: it might require impractical amounts of time, memory, or known plaintexts. It also might require the attacker be able to do things many real-world attackers can’t: for example, the attacker may need to choose particular plaintexts to be encrypted or even to ask for plaintexts to be encrypted using several keys related to the secret key. Furthermore, it might only reveal a small amount of information, enough to prove the cryptosystem imperfect but too little to be useful to real-world attackers. Finally, an attack might only apply to a weakened version of cryptographic tools, like a reduced-round block cipher, as a step towards breaking the full system.

In cryptanalysis, frequency analysis is the study of the frequency of letters or groups of letters in a ciphertext. The method is used as an aid to breaking classical ciphers.

Frequency analysis is based on the fact that, in any given stretch of written language, certain letters and combinations of letters occur with varying frequencies. Moreover, there is a characteristic distribution of letters that is roughly the same for almost all samples of that language. For instance, in a sample of English text, E, T, A and O are the most common letters, while Z, Q and X are rare. Likewise, TH, ER, ON and AN are the most common pairs of letters (termed bigrams or digraphs), and SS, EE, TT and FF are the most common repeats. The nonsense phrase ‘ETAOIN SHRDLU’ represents the 12 most frequent letters in typical English-language text.

In some ciphers, such properties of the natural language plaintext are preserved in the ciphertext, and these patterns have the potential to be exploited in a ciphertext-only attack.

Frequency analysis for simple substitution ciphers

In a simple substitution cipher, each letter of the plaintext is replaced with another, and any particular letter in the plaintext will always be transformed into the same letter in the ciphertext. For instance, if all occurrences of the letter e turn into the letter X, a ciphertext message containing numerous instances of the letter X would suggest to a cryptanalyst that X represents e.

The basic use of frequency analysis is first to count the frequency of ciphertext letters and then to associate guessed plaintext letters with them. More X’s in the ciphertext than anything else suggests that X corresponds to e in the plaintext, but this is not certain; t and a are also very common in English, so X might represent either of them instead. It is unlikely to represent a plaintext z or q, which are far less common. Thus the cryptanalyst may need to try several combinations of mappings between ciphertext and plaintext letters.
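
A minimal sketch of the counting step just described is given below. The ciphertext and the mapping rule are invented for illustration; a real analysis would use a much longer ciphertext and refine the initial guesses by hand.

    from collections import Counter
    import string

    ENGLISH_ORDER = "ETAOINSHRDLUCMWFGYPBVKJXQZ"  # most to least frequent

    def rank_letters(text):
        # Count ciphertext letters and list them from most to least frequent.
        counts = Counter(c for c in text.upper() if c in string.ascii_uppercase)
        return "".join(letter for letter, _ in counts.most_common())

    def guess_mapping(ciphertext):
        # Pair the most frequent ciphertext letters with the most frequent English
        # letters -- a first approximation the analyst would then refine by hand.
        ranked = rank_letters(ciphertext)
        return {c: p for c, p in zip(ranked, ENGLISH_ORDER)}

    ciphertext = "XJXJX GXQXB XGSXX"            # toy ciphertext, dominated by X
    print(guess_mapping(ciphertext)["X"])       # 'E' -- X is provisionally read as e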

More complex uses of statistics can be conceived, such as considering counts of pairs of letters (digrams), triplets (trigrams), and so on. This provides more information to the cryptanalyst; for instance, Q and U nearly always occur together in that order in English, even though Q itself is rare.
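
The same counting idea extends to digrams, as in the short sketch below; in practice the resulting counts would be compared against standard English digram tables.

    from collections import Counter

    def digram_counts(text):
        # Strip non-letters and count adjacent pairs of letters.
        letters = [c for c in text.upper() if c.isalpha()]
        return Counter("".join(pair) for pair in zip(letters, letters[1:]))

    print(digram_counts("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG").most_common(3))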

.

.

UNIT

Secret Intelligence Service

Research, Information, Analysis and Methodology 

Notes

.

.

Adversitate. Custodi. Per Verum