What is it called when multiple participants are interviewed at the same time?

The conversations that can arise in a focus group can help overcome many of the shortcomings of interviews. In a one-to-one setting the interviewer and interviewee are left to fend for themselves. If the interviewee is not talkative, or if an awkward dynamic stifles the discussion, the interview may fail. Group discussions support interactivity, with participants ideally balancing each other. Participants can encourage each other to speak up, either in support of or opposition to earlier statements. This highly dynamic situation can stimulate participants to raise issues that they might not have identified in one-to-one interviews.

As the rigidity of a fully structured interview is ill suited for group settings, focus groups are generally semistructured or unstructured. A fully structured focus group would require asking each question to each individual in order, without any room for interaction between participants. A fully structured focus group would essentially be equivalent to multiple individual interviews conducted simultaneously.

Interactive focus groups present researchers with several logistical and management challenges. As conversation takes time, focus groups might be limited to a relatively small number of questions—fewer than you would cover in comparable interviews. Conflicts may arise, particularly in focus groups involving controversial topics. Participants may be unwilling to discuss topics involving potentially sensitive information—perhaps relating to health care or finances—in a group setting. Individual interviews might be more appropriate for discussion of these topics.

Particularly talkative and opinionated participants can monopolize conversations, crowding out other viewpoints. If this happens, you will need to find a diplomatic way to ask chatterboxes to yield the floor. Simply cutting them off brusquely may give offense and discourage further participation. Disrespectful conduct can cause similar problems. When conducting a focus group, you must be careful to avoid power struggles or other confrontations with participants, as such battles can sabotage the whole process (Brown, 1999).

Group dynamics can impose certain limits on the extent to which you can generalize from focus group results. Although you'll know when people disagree strongly enough to speak up, you may not know how to interpret silence. Participants who sit quietly may agree with expressed opinions or they may simply be opting out of the conversation.

Extracting useful data from a focus group requires skillful facilitation. You need to manage personality conflicts, encourage participation from all participants, keep the conversation going, monitor the clock, and work through your list of questions, all the while collecting the data that is at the heart of your effort. With a roomful of participants to manage, this can be quite a challenge. Fortunately, this need not fall on only one person's shoulders. A focus group might have two moderators: someone who is skilled in running such groups can work alongside an HCI researcher who is familiar with the problem at hand (Brown, 1999). These collaborators can work together to ensure successful data collection.

The selection of focus group participants can be an art in itself. Should your participants represent multiple backgrounds and perspectives, or would a more homogenous group be appropriate? What about familiarity—do you want participants who are unknown to each other or groups consisting of friends or colleagues? Participants in homogenous groups have common backgrounds and experiences that may help promote discussion and exchange, giving you viewpoints that represent this shared context. In some cases, you may not be able to find a broadly diverse group of participants. If you are developing a system for use by a narrowly defined group of experts—such as brain surgeons or HCI researchers—your groups are likely to be largely homogenous, at least in the relevant respects.1 Homogenous groups have the disadvantage of narrowing the range of perspectives. For projects that aim to support a broad range of users—for example, systems aimed at meeting the needs of all patrons in a large metropolitan library—broadly based focus groups representing multiple viewpoints may be more helpful. Groups that are too diverse may pose a different set of problems, as a lack of any common ground or shared perspectives may make conversation difficult (Krueger, 1994). In any case, participants in focus groups should have an interest in the topic at hand and they should be willing to participate constructively (Brown, 1999).

Focus groups may be inappropriate for addressing sensitive or controversial topics. Many participants may be reluctant to discuss deeply personal issues in a group setting. Controversial topics may lead to arguments and bitterness that could destroy the group's effectiveness (Krueger, 1994). Although such concerns may seem unrelated to much HCI work, group discussions can take on a life of their own, possibly bringing you unanticipated difficulties. If you have any concerns that difficult issues might arise, you may decide to use one-to-one interviews instead.

Although most focus groups are at least somewhat unstructured, structured focus group techniques can be useful for building group consensus on topics of common interest. The Nominal Group Technique (NGT) (Delbecq and Van de Ven, 1971) asks participants to answer a specific question. Participants start by writing individual responses to the question, which are then provided to a moderator and discussed with the group. Participants then prioritize their "top 5" responses, and a ranked tally is generated to identify the most important consensus responses to the question at hand (Centers for Disease Control, 2006). An NGT inquiry into the unmet information needs of home-care nurses dealing with geriatric patients after hospital discharge asked participants "In your experience, what information-related problems have your elderly patients experienced that contributed to hospital readmissions?" Respondents identified 28 different needs in six different categories, including medication, disease/condition, nonmedication care, functional limitations, and communication problems (Romagnoli et al., 2013).
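To make the tallying step concrete, here is a minimal sketch in Python of how ranked "top 5" lists might be aggregated into a consensus ordering. The point scheme (5 points for a first-place vote down to 1 point for a fifth-place vote) and the sample responses are illustrative assumptions, not details reported by Romagnoli et al. (2013).

```python
from collections import Counter

def ngt_tally(rankings, top_n=5):
    """Aggregate participants' ranked 'top N' lists into a consensus ordering.

    Each ranking is an ordered list of item labels, most important first.
    An assumed scoring scheme gives top_n points to rank 1, top_n - 1 to
    rank 2, and so on; ties are broken by how many participants listed the item.
    """
    scores = Counter()
    votes = Counter()
    for ranking in rankings:
        for rank, item in enumerate(ranking[:top_n]):
            scores[item] += top_n - rank   # 5, 4, 3, 2, 1 points
            votes[item] += 1               # number of participants listing the item
    return sorted(scores.items(), key=lambda kv: (-kv[1], -votes[kv[0]]))

# Hypothetical responses to a question about information-related problems.
rankings = [
    ["medication side effects", "discharge instructions unclear", "who to call"],
    ["discharge instructions unclear", "medication side effects", "equipment use"],
    ["who to call", "medication side effects", "discharge instructions unclear"],
]

for item, score in ngt_tally(rankings):
    print(f"{score:>3}  {item}")
```

The highest-scoring items in the tally are the group's consensus priorities, which can then be reported back for a final round of discussion.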

URL: https://www.sciencedirect.com/science/article/pii/B978012805390400008X

Focus Groups

Kathy Baxter, ... Peter McNally, in Understanding your Users (Second Edition), 2015

Task-Based Focus Groups

In task-based focus groups, participants are presented with a task (or scenario) and asked to complete it with a prototype or the actual product. Several participants may be working in the same room at the same time, but they should be working individually. If your product is software or a web application, this will obviously require several computers. After participants have completed their task(s), they are brought together to discuss their experiences.

It is best to give the participants the same set of core tasks, so they can share common experiences. For example, in one study, participants were asked to look up information in a user’s manual and describe how confident they were that the answer they gave was right (Hackos & Redish, 1998). Similar focus groups have been conducted for car owner’s manuals, appliance manuals, and telephone bills (Dumas & Redish, 1999). Keep in mind, however, that a focus group is not a substitute for a usability test.

Similarly, you can present participants with multiple activities or artifacts during a single focus group. Participants may start off with a brief group discussion, but then they work individually for the majority of the session. Multiple facilitators are needed for this activity. Each facilitator works with a participant one-on-one and completes a different activity (e.g., brainstorming new ways of solving a problem, viewing a prototype). After all the participants have gone through all of the activities, the group reconvenes to discuss experiences. Ideally, all participants will have completed all of the activities.

An excellent example of this methodology began with each of five focus groups photographing participants holding phone handsets to assess gripping styles (Dolan, Wiklund, Logan, & Augaitis, 1995). Participants then rank-ordered six conventionally-designed handsets and six progressively-designed handsets according to several ergonomic and emotional attributes (after having experience with each). Finally, participants critiqued the handset designs according to personal preference, and each built a clay model of his or her ideal handset. By giving participants exposure to a wide range of experiences with the potential product or domain area, participants do not need to “imagine” what a product would be like. They can discuss their actual experiences with the product or domain and determine whether or not the product would support their desired outcomes or goals. The activities can also spark new ideas for you to draw upon.

The same type of questions can be asked in task-based and nontask-based focus groups. The benefit of task-based focus groups is that follow-up discussions are richer when participants have the opportunity to actually use the product than when they must simply imagine it or remember when they used it last. With this technique, participants can reference the tasks they just completed to provide concrete examples, or the tasks may trigger previous experiences with the product.

The cost of doing such a study is that you must have several sets of your product available so that participants can work with them simultaneously. You will also need several facilitators to help, and the prep time to develop such a focus group will be longer than for traditional focus groups, since you need to create the materials for several activities. It can also be more expensive because more materials are needed. However, the added benefit of giving participants something to experience and work with is worth the cost and should be done whenever possible!

URL: https://www.sciencedirect.com/science/article/pii/B9780128002322000122

Investigating Students’ Feelings and Their Perspectives Toward Web 2.0 Technologies in a Teacher Education Course

Yungwei Hao, in Emotions, Technology, Design, and Learning, 2016

Interviews

Focus-group interviews elicited information from the 12 focus-group participants about their perspectives on the Web 2.0 tools. The 12 participants, a subset of the 57 students, took part in the study voluntarily. Group interviews can facilitate personal disclosure (Farquhar, 1999). The group of participants met with the research team on a single occasion on campus, and the discussion lasted approximately 3.5 hours. The informal group discussions were audio-recorded and transcribed. Interviews were semi-structured with open-ended questions, and the interviewees reflected on their experiences using the Web 2.0 tools in the course. Sample interview questions included: "What and how did you feel while using [the technology]?"; "What do you think about [the technology]? Why?"; "How did [the technology] help you complete the course?"; and "What problem(s) did you have using [the technology] in the course?"

URL: https://www.sciencedirect.com/science/article/pii/B9780128018569000128

What are virtual walls to flow of knowledge in teamwork discussions?

Vichita Vathanophas, Suphong Chirawattanakij, in Technology and Knowledge Flow, 2011

Stage of emotion

Emotion is the mediator between environment as an input and behaviour as an output (Scherer, 1994). Environmental effects are believed to influence knowledge-sharing behaviour in the meeting room. Some focus group participants identified uncomfortable conditions in a meeting room as an antecedent of stress and negative emotion, which in turn disrupts knowledge-sharing behaviour. Moreover, seemingly immaterial factors such as erratic or insufficient sleep or drinking too much coffee can also affect emotions, and thus the sharing of knowledge in a teamwork discussion.

Sorting the problems that arise in teamwork discussions into the four-dimensional framework can enhance the efficiency of the discussion: persistent problems can be analysed separately and appropriate remedies provided. Beyond improving the efficiency of data flow in team discussions, this four-dimensional framework can also be extended to address the socialisation process of knowledge generation. The next section examines the relationship between this framework and the Japanese concept of Ba.

URL: https://www.sciencedirect.com/science/article/pii/B9781843346463500046

Focus Groups

Chauncey Wilson, in Interview Techniques for UX Practitioners, 2014

Online Focus Groups

Online focus groups are useful for reaching participants who are geographically dispersed and can reduce travel costs. There are a variety of online focus group technologies, including the following:

Telephone focus groups. In a telephone focus group, participants are invited to join a group telephone discussion on a specific date and time. Participants receive confirmation letters or e-mails and are called on the day of the focus group as a reminder. Each person is welcomed as he or she joins. If clients are remote, they can send e-mails or even fax questions that they want the moderator to ask the phone participants. Telephone focus groups require very experienced moderators.

Web applications. A moderator can communicate with both clients and participants using chat areas and whiteboards for presenting information.

Chat lines. Hass (2004), for example, used a chat application to conduct focus groups with hearing-disabled users. Chat logs provided the data for a qualitative analysis.

Bulletin boards and blogs. Threaded bulletin boards or blogs are a possible method for collecting opinions, attitudes, perceptions, and even experiences if a prototype was made available.

Online focus group services. These services use proprietary web technologies to conduct focus groups for clients.

Mann and Stewart (2000) discuss some of the general issues with Internet interviewing and online focus groups. Key issues include the following:

Interpretations of pauses. Pauses during an online chat can be intended by the moderator as a clue that he or she is “listening” to the participant. However, the pause might be perceived as inattention or distraction. The same pause in face-to-face focus groups might be viewed as an “attentive pause” that allows people to comment on the current topic. The lack of face-to-face contact can change the meaning of pauses.

Establishing rapport. You can start to establish rapport before the online focus group by posting a welcome message that will set the tone for the session. You might ask online participants to introduce themselves and provide a description of their current surroundings or a brief note about their experience with the topic of interest.

Ground rules for participation. Ground rules are perhaps more important for online focus groups than face-to-face focus groups because you don’t have visual or spoken ways to retain control over the session; however, too much emphasis on ground rules can set a negative tone for participants and inhibit conversations.

URL: https://www.sciencedirect.com/science/article/pii/B9780124103931000053

Marketing Industry

Maria Anne Skaates, in Encyclopedia of Social Measurement, 2005

Operationalization of Research Questions

Research questions such as the ones presented in Figs. 1 and 2 have to be operationalized, i.e., reformulated to allow for measurement using secondary data and/or primary data (including communication with the persons whose behavior and opinions are to be examined by the marketing researchers via surveys, focus groups, participant observation, or interviews). In connection with this communication, it is paramount that the persons whose behavior and attitudes will be examined understand the questions posed in the same way as the marketing researchers believe they are understood. This can be especially problematic for marketing researchers if cross-cultural data are to be collected and interpreted. Especially if the marketing researchers do not have sufficient knowledge of the foreign market and foreign customers, there are risks of ethnocentrism and poor translation equivalence in the formulation of research questions and in the interpretation of responses. Methodologies from anthropology, communications, and linguistics may, however, be used by marketers to minimize ethnocentrism and translation problems in marketing research.

Ethical issues, such as ensuring anonymity and confidentiality of respondents and taking into consideration issues concerning their personal risks, consent, and privacy, are also considered by marketers in connection with the operationalization of research questions as well as with the rest of the marketing research process. In connection with these issues, marketers are subject to substantial national and supranational (e.g., European Union) variations in data protection legislation, and the marketing research industry's own rules, recommendations, and norms also vary from country to country.

URL: https://www.sciencedirect.com/science/article/pii/B0123693985002486

GIS Methods and Techniques

Sven Fuhrmann, in Comprehensive Geographic Information Systems, 2018

1.30.1 Introduction

A young start-up company has acquired a sizable project to develop a mobile Geographic Information System App for environmental monitoring. The App will be used by citizens to document and monitor environmental changes over a longer period of time and will run on several operating systems and devices. The project is almost completed. The project manager asks the development team if they expect any delays in publishing the first version to the stakeholders and the public. The software development manager responds that she and her team expect to be on time and might even have the resources to conduct a focus group to solicit general user feedback. However, time and funding limits might require selecting focus group participants from the development group. The project manager signs off on this suggestion and instructs the public relations team to start advertising the release date in the media and to the investors. How do you think this story will end?

Over the last four decades the term user-centered design (UCD) has been used to generally describe design processes that are influenced by users and their tasks (Abras et al., 2004). The term originated in the mid-1980s with researchers in the human–computer interaction domain, most prominently Norman and Draper (1986). Norman, then a researcher at the University of California San Diego, published several books on the topic of UCD. "The Design of Everyday Things" (Norman, 1988) became a best-selling publication describing how design generally serves as a communication vehicle between objects and users. In addition, Norman (1988) provides several guidelines on how to optimize a design to make the experience of using an object intuitive, useful, and enjoyable. It took several years until the UCD approach made its way into Geographic Information System (GIS) development. The desire for a better and more productive user experience was mostly driven by newly emerging graphical user interfaces, the need for customized GIS applications, and a larger and more diverse user group. In the early 1990s, Medyckyj-Scott and Hearnshaw (1993) and Davies and Medyckyj-Scott (1996) started to discuss and describe cognitive abilities, conceptual GIS use models, and the need for user and performance studies. Their initial work and the resulting research questions have laid a foundation for past and current research and development in user-centered geoinformation technology design.

URL: https://www.sciencedirect.com/science/article/pii/B9780124095489096081

Planning

Tom Tullis, Bill Albert, in Measuring the User Experience (Second Edition), 2013

3.4 Evaluation Methods

One of the great features of collecting UX metrics is that you’re not restricted to a certain type of evaluation method (e.g., lab test, online test). Metrics can be collected using almost any kind of evaluation method. This may be surprising because there is a common misperception that metrics can only be collected through large-scale online studies. As you will see, this is simply not the case. Choosing an evaluation method to collect metrics boils down to how many participants are needed and what metrics you’re going to use.

3.4.1 Traditional (Moderated) Usability Tests

The most common usability method is a lab test that utilizes a relatively small number of participants (typically 5 to 10). The lab test involves a one-on-one session between a moderator (usability specialist) and a test participant. The moderator asks questions of the participants and gives them a set of tasks to perform on the product in question. The test participant is likely to be thinking aloud as she performs the various tasks. The moderator records the participant's behavior and responses to questions. Lab tests are used most often in formative studies where the goal is to make iterative design improvements. The most important metrics to collect concern usability issues, including issue frequency, type, and severity. Collecting performance data such as task success, errors, and efficiency may also be helpful.

Self-reported metrics can also be collected by having participants answer questions regarding each task or at the conclusion of the study. However, we recommend that you approach performance data and self-reported data very carefully because it’s easy to overgeneralize the results to a larger population without an adequate sample size. In fact, we typically only report the frequency of successful tasks or errors. We hesitate even to state the data as a percentage for fear that someone (who is less familiar with usability data or methods) will overgeneralize the data.

Usability tests are not always run with a small number of participants. In some situations, such as comparison tests, you might want to spend some extra time and money by running a larger group of participants (perhaps 10–50 users). The main advantage of running a test with more participants is that as your sample size increases, so does your confidence in your data. Also, this will afford you the ability to collect a wider range of data. In fact, all performance, self-reported, and physiological metrics are fair game. But there are a few metrics that you should be cautious about. For example, inferring website traffic patterns from usability-lab data is probably not very reliable, nor is looking at how subtle design changes might impact the user experience. In these cases, it is better to test with hundreds or even thousands of participants in an online study.
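To illustrate why confidence grows with sample size, here is a minimal Python sketch that computes an approximate 95% confidence interval for a task-success rate using an adjusted-Wald (Agresti-Coull) formula, a common choice for small usability samples. The numbers are illustrative, and this is a sketch of the general statistical idea rather than a procedure quoted from this chapter.

```python
import math

def adjusted_wald(successes, n, z=1.96):
    """Approximate 95% confidence interval for a task-success proportion.

    Uses the adjusted-Wald (Agresti-Coull) formulation, which behaves better
    than the plain Wald interval for the small samples typical of moderated
    usability tests.
    """
    n_adj = n + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    half_width = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half_width), min(1.0, p_adj + half_width)

# Illustrative numbers: the same 80% observed success rate is far less
# certain with 5 participants than with 50.
for successes, n in [(4, 5), (40, 50)]:
    low, high = adjusted_wald(successes, n)
    print(f"{successes}/{n} successful: roughly {low:.0%} to {high:.0%}")
```

With 4 of 5 participants succeeding, the interval spans roughly 36% to 98%, while 40 of 50 narrows it to roughly 67% to 89%, which is why quoting a bare percentage from a handful of lab participants can be misleading.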

Focus Groups versus Usability Tests

When some people first hear about usability testing, they believe it is the same as a focus group. But in our experience, the similarity between the two methods begins and ends with the fact that they both involve representative participants. In a focus group, participants commonly watch someone demonstrate or describe a potential product and then react to it. In a usability test, participants actually try to use some version of the product themselves. We’ve seen many cases where a prototype got rave reviews from focus groups and then failed miserably in a usability test.

3.4.2 Online (Unmoderated) Usability Tests

Online studies involve testing with many participants at the same time. It’s an excellent way to collect a lot of usability data in a relatively short amount of time from users who are dispersed geographically. Online studies are usually set up similarly to a lab test in that there are some background or screener questions, tasks, and follow-up questions. Participants go through a predefined script of questions and tasks, and all their data are collected automatically. You can collect a wide range of data, including many performance metrics and self-reported metrics. It may be difficult to collect issues-based data because you’re not observing participants directly. But the performance and self-reported data can point to issues, and verbatim comments can help infer their causes. Albert, Tullis, and Tedesco (2010) go into detail about how to plan, design, launch, and analyze an online usability study.
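As a rough illustration of what "data are collected automatically" can mean in practice, the sketch below summarizes per-task success rates and completion times from a hypothetical study export. The file name and the column names ('task', 'success', 'time_seconds') are assumptions for the example; real tools export their own formats.

```python
import csv
from collections import defaultdict

def summarize(path):
    """Summarize per-task success rate and a simple median completion time
    from an unmoderated-study CSV export (hypothetical format)."""
    by_task = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_task[row["task"]].append(
                (row["success"].strip().lower() in ("1", "true", "yes"),
                 float(row["time_seconds"]))
            )
    for task, results in sorted(by_task.items()):
        successes = [s for s, _ in results]
        times = sorted(t for s, t in results if s)   # times on successful attempts
        rate = sum(successes) / len(successes)
        median = times[len(times) // 2] if times else float("nan")  # midpoint median for the sketch
        print(f"{task}: {rate:.0%} success (n={len(successes)}), "
              f"median time {median:.0f}s")

# summarize("online_study_export.csv")  # hypothetical export file
```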

Unlike other methods, online usability studies provide the researcher a tremendous amount of flexibility in the amount and type of data they collect. Online usability studies can be used to collect both qualitative and quantitative data and can focus on either user attitudes or behaviors (see Figure 3.1). The focus of an online study depends largely on the project goals and is rarely limited by the type or amount of data collected. While online studies are an excellent way to collect data, they are less ideal when the UX researcher is trying to gain deeper insight into users' behaviors and motivations.

Figure 3.1. How online usability testing tools fit with other common user research methods.

Online usability tools come in many different flavors; however, there are a few different types of tools that each specialize in a different aspect of the user experience. Figure 3.2 shows the breakdown of different types of online testing tools. These tools change constantly, with new ones offering additional features and functionality becoming available all the time.

Figure 3.2. A breakdown of the different types of online (unmoderated) testing tools.

Quantitative-based tools focus on data collection. They typically are set up to collect data from 100+ participants and provide some very nice analytical and reporting functions.

Full-service tools such as Keynote’s WebEffective, Imperium, and Webnographer provide a complete range of features and functionality for carrying out any type of online study, along with support from a team of experts to design an online study and perhaps help with the analysis.

Self-service tools include Loop11, UserZoom, and UTE. These tools provide a tremendous amount of functionality to the researcher, with minimal support from the vendor. These tools are increasingly becoming more powerful and easy to use, with low-cost options.

Card-sorting/IA tools help the researcher collect data about how users think about and organize information. Tools such as OptimalSort, TreeJack, and WebSort are very useful, easy to set up, and affordable.

Surveys are increasingly becoming useful to the UX researcher. Tools such as Qualtrics, SurveyGizmo, and SurveyMonkey let the researcher embed images into the survey and collect a wide variety of self-reported metrics, along with other useful click metrics.

Click/Mouse tools such as Chalkmark, Usabilla, ClickTale, and FiveSecondTest let the researcher collect data about where users click on a web page or how they move their mouse around. These tools are useful for testing awareness of key features, the intuitiveness of the navigation, or what grabs users' attention the most.

Qualitative-based online tools are designed to collect data from a small number of participants who are interacting with a product. These tools are extremely helpful for gaining insight into the nature of the problems that users encounter, as well as for providing direction on possible design solutions. There are different types of qualitative-based tools.

Video tools such as UserTesting.com, Userlytics, and OpenHallway allow you to collect a rich set of qualitative data about the users’ experience in using a product in the form of a video file. Observing these videos lets the researcher collect performance metrics, and possibly self-reported metrics, depending on the capabilities of the tool.

Reporting tools provide the user with an actual report that is typically a list of verbatim comments from users about their experience in using the product. The metrics may be limited, but it is certainly possible to do text analysis of the feedback, looking for common trends or patterns in the data; a brief sketch of such an analysis appears after this list.

Expert review tools such as Concept Feedback provide the user researcher with feedback from a group of “experts” about a product’s design and usability. While the feedback is typically qualitative in nature, the researcher might also collect self-reported metrics from each reviewer.
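As a rough illustration of the text analysis mentioned above for reporting tools, the following Python sketch counts recurring content words across a set of verbatim comments. The comments and the stopword list are invented for the example; a real analysis would use the actual exported feedback and a fuller stopword set.

```python
import re
from collections import Counter

# Hypothetical verbatim comments exported from a reporting tool.
comments = [
    "The checkout page was confusing and the error message didn't help.",
    "Search worked well, but checkout took too long.",
    "I couldn't find the checkout button on my phone.",
]

STOPWORDS = {"the", "and", "a", "an", "on", "my", "i", "was", "but", "too",
             "didn't", "couldn't", "to", "of", "in", "it"}

def term_frequencies(texts, min_len=3):
    """Count content words across comments as a first pass at spotting
    recurring themes (e.g., 'checkout' appearing in most comments)."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            if len(word) >= min_len and word not in STOPWORDS:
                counts[word] += 1
    return counts

for word, count in term_frequencies(comments).most_common(5):
    print(f"{word}: {count}")
```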

Which One Goes First? Lab or Online Test?

We often get questions about which should come first: a traditional lab study followed by an online study, or vice versa. There are some pretty strong arguments for both sides.

Lab first, then online:
- Identify/fix "low-hanging fruit" and then focus on remaining tasks with a large sample size.
- Generate new concepts, ideas, or questions through lab testing and then test/validate them online.
- Validate attitudes/preferences observed in lab testing.

Online first, then lab:
- Identify the most significant issues online through metrics and then use a lab study to gather deeper qualitative understanding of those issues.
- Collect video clips or more quotes from users to help bring the metrics to life.
- Gather all the metrics to validate the design; if it tests well, then there is no need to bring users into the lab.

3.4.3 Online Surveys

Many UX researchers think of online surveys strictly for collecting data about preferences and attitudes, and firmly in the camp of market researchers. This is no longer the case. For example, many online survey tools allow you to include images, such as a prototype design, within the body of the survey. Including images within a survey will allow you to collect feedback on visual appeal, page layout, perceived ease of use, and likelihood to use, to name just a few metrics. We have found online surveys to be a quick and easy way to compare different types of visual designs, measure satisfaction with different web pages, and even preferences for various types of navigation schemes. As long as you don’t require your participants to interact with the product directly, an online survey may suit your needs.

The main drawback of online surveys is that the data received from each participant are somewhat limited, but that may be offset by the larger number of participants. So, depending on your goals, an online survey tool may be a viable option.

What is it called when you interview a group of people?

Panel interview. This is by far the most common type of group interview format. In this format, interviewees are interviewed by a group (or panel) of interviewers.

In which type of interview are candidates interviewed at the same time?

Group interview definition. A group interview can mean that a number of candidates are interviewed together at the same time (known as a candidate group interview) or that one candidate is interviewed by several different department representatives at the same time (known as a panel group interview).

What is a collective interview?

As its name suggests, the collective interview consists of interviewing several candidates at the same time; for this reason, the collective interview is also called a "group interview". The number of candidates invited to a collective interview may vary depending on the number of positions to be filled.