Principal Investigator's Guide, Chapter 3: Choosing an Evaluator: Matching Project Needs with Evaluator Skills and Competencies

This chapter is about locating an evaluator well matched to your project needs. Whomever you select will influence the shape of your evaluation plan and the results that you get from it. We'll start by looking at some choices you'll need to make while searching for an evaluator. Next we'll provide an orientation to the expertise and skills that your evaluator should possess. After that we'll share some issues to keep in mind when adding an evaluator to your project team. Finally, we'll suggest some ways to locate an evaluator.

Mary Marcussen is a national grant writer and project design specialist with a reputation for high standards and professionalism in project and proposal development. Her record includes more than 40 successful proposals to the National Science Foundation to support museum exhibitions, planetarium shows, large format films, informal learning research, community and youth programs, and projects to build infrastructure for the field of informal science education. She is the former National Grants Manager for the California Academy of Sciences, prior to which she served as a systematic biologist and senior educator for the museum. With a B.A. in Biology, she has conducted field research for a variety of nonprofit and government agencies. Her development experience includes capital campaigns and high production corporate, foundation, and government grant work. She works with Principal Investigators to effectively manage both the people and the process involved with proposal development including research, project design, partnerships, and evaluation.

Identify your project requirements

Before we get started, imagine setting up your best friend with a blind date. First, you critique each candidate's credentials, background, interests, job experience, credibility, social connections, and, of course, interest in dating your friend. These factors combine to shape your perception of candidates and, ultimately, your final selection. After the big night, you are eager to learn whether your friend found a good match. You have invested significant time and effort in your role as matchmaker, and you hold yourself personally responsible for the outcome of the relationship. Your friend is depending on you, and you desperately need this blind date to "work out."

Choosing an evaluator who matches the needs of your project is much like setting up a blind date. Before considering the skills and qualifications of a potential evaluator you must first establish some basic parameters for your project. To skip this step would be like selecting a date for your best friend without knowing what type of person interests them. Or like interviewing candidates for a job without a job description.

For example, you will need to decide if your evaluator should be experienced in formative or summative evaluation or should be someone who can do both types. You'll also want to understand your options for hiring an internal (in-house) as opposed to an external evaluator; for choosing an evaluation firm vs. an independent contractor; and for working with a local evaluator vs. someone located across the country. You also need to consider any special skills you may require: do you need an evaluator who speaks another language? Or one who is familiar with a particular culture or subject matter?

If this process seems a far stretch from setting up a blind date, think of your project as your best friend (for whom you will do anything) and the evaluator as the date (whom your friend needs to be happy). After you read the following example of a project team choosing an evaluator, we will look critically at each of the decisions made throughout the process.

True Stories: Articulate your project's evaluation needs before hiring

In a project to revitalize static wildlife dioramas created in the 1950s, the Oakland Museum of California aims to connect visitors more deeply to urgent environmental issues. To help minority audiences in particular make such connections, the museum involved a diverse range of visitors in the co-design and development of the renovated exhibits.

Hotspot California: Bringing Dioramas to Life Through Community Voices is a project of the Oakland Museum of California that is funded by the Informal Science Education program at the National Science Foundation (DRL 09-15778). The project's evaluation needs were many and varied. First, like most informal science education projects, this one required multiple stages of evaluation, and for the project's summative evaluation, the team required an evaluator with experience in exhibit design and a broad perspective of the museum community. Second, to build institutional evaluation capacity, the museum wanted to involve its staff in the evaluation process. Third, the project team wanted to conduct formative evaluation with exhibit prototypes in a variety of museum settings beyond Oakland. Finally, the project team wanted to examine the effects of visitor involvement in the process of exhibition design.

With a tight budget and timeline, the project team made some critical decisions. First, they selected an independent evaluator to 1) conduct front-end evaluation by collecting baseline data from members of the intended audience, 2) conduct formative evaluation to determine the effectiveness of the exhibit prototypes in helping visitors develop deeper connections to their local environments, and 3) train museum staff to assist with the front-end audience study and with exhibit prototyping and formative evaluation. This particular evaluator had experience with the museum's intended inner-city audiences.

In addition, the project team selected a second independent evaluator to conduct summative evaluation. This evaluator was widely recognized for her expertise in evaluating museum exhibits.

Let's tease apart the process by which the project team made its selections.

Four Fundamental Choices

Your project will have certain qualities and requirements that will influence your choice of evaluator. Here are some of the parameters to consider:

  1. What type of evaluation do you need?
  2. Do you need an internal or external evaluator?
  3. Do the requirements of the project lend themselves to an independent contractor or an evaluation firm?
  4. Will your project be better served by a local or out-of-area evaluator?

Consideration #1: What type of evaluation do you need?

Different types of evaluation require different sets of evaluator expertise and skills. Depending on the goals of your project, you may need to find one "date" who runs cross-country marathons and another who holds season tickets to the ballet. Or someone who does both. In particular, front-end, formative, and summative evaluation each require different approaches. Some evaluators conduct all three types of evaluation while others are particularly skilled at one type. These three types of evaluation were defined in Chapter 2 and they bear repeating here in the context of choosing an evaluator.

Front-end evaluation

Front-end evaluation gathers background information that informs practitioners as they plan and develop a project. Such information-seeking evaluation often takes the form of audience research, gathering data about the knowledge, interests, and experiences of an intended audience. So, it's important for a front-end evaluator to have an understanding of or prior experience with your intended audience.

Formative evaluation

Formative evaluation focuses on learning how to improve or enhance a project. Such improvement-oriented evaluation gathers data about a project's strengths and weaknesses with the expectation that both will be found and the information can be used to make improvements. It can be helpful if your formative evaluator has experience in your particular type of project, for example, film or exhibit production, or educational technology, or professional development for informal STEM education practitioners.

Summative evaluation

Summative evaluation determines a project's overall effectiveness and value. Such judgment-oriented evaluation can be conducted at interim points or at the end of a project, and is particularly important in making decisions about continuing, replicating, or terminating a project. Summative evaluations are often requested or required by funders, including the National Science Foundation. Deciding which data are essential and determining how much data can be collected requires experience and careful thought.

Hiring different evaluators for formative and summative

In the case of Hotspot California, the Oakland Museum of California chose to establish formative and summative evaluation as separate roles. Hiring separate evaluators for the different phases of evaluation is not always necessary, and working with multiple evaluators requires developing a collaborative evaluation plan. For this project, however, the two evaluators brought complementary skills and experience. Both understood natural history dioramas and had evaluated them previously. However, one evaluator understood the intended inner-city audience while the other had a deep experience with the type of summative evaluation that the museum wanted to conduct.

Consideration #2: Should you choose an internal evaluator or an external evaluator?

Another important decision is whether to engage an internal (in-house) or external evaluator or to employ an internal evaluator who works with an external consultant. While a funder may dictate this decision -- the National Science Foundation, for example, typically requires an external evaluator for summative evaluation -- Principal Investigators and their institutions often spend a great deal of time weighing the pros and cons of each option.

Internal evaluator: pros and cons

Internal staff are typically more accessible for team meetings and their cost may be covered by an institution's operating budget (CPB 2011). In-house evaluators also are familiar with the culture of the organization and the project team and may have working knowledge of the project's subject matter. On the other hand, internal evaluators may be invested in the outcome of the evaluation, caught up in internal politics, or hampered by supervisory relationships.

External evaluator: pros and cons

External evaluation is typically more expensive. However, the results often bear more weight because the evaluator is considered to have no vested interest in the project outcomes. An external evaluator also may be able to work more independently from the producers of the project deliverables (see Kellogg 2004, p. 58).

Combining internal and external evaluators

Some projects blend these approaches by having internal staff conduct evaluations under the guidance of an independent evaluator who reviews the evaluation design and assesses the validity of the findings and conclusions. This approach can maintain external expertise and impartiality along with the benefit of an internal person's first-hand project knowledge.

Teaming up internal and external evaluators

The Oakland Museum hired external evaluators, and had them train museum staff to become deeply involved in the evaluation process during the course of the Hotspot California project. If this approach helps the museum to develop internal evaluation expertise, in the future the museum may be able to limit the role of external evaluators to independent oversight.

The following chart weighs a variety of factors when considering internal versus external evaluation.

Comparison of Internal vs. External Evaluators (adapted from Conley-Tyler 2005, Kellogg 2004, and Patton 2008)


Knowledge and skills
  • Internal: Internal evaluators work in the environment in which the project operates and may have firsthand knowledge of the project, content, and organizational policies and practices.
  • External: External evaluators may possess special skills or exposure to a wide range of methods and practices that would be useful to incorporate.

Perceived bias
  • Internal: There may be a perception of bias if the internal evaluator is "too close" to the subject matter.
  • External: Perceived impartiality is a strong argument for the use of external evaluators.

Availability
  • Internal: Staff evaluators are readily available for project meetings or spontaneous data-collection activities.
  • External: Local evaluators can be readily available or can use telecommunications when needed.

Cost
  • Internal: Internal evaluators on salary can have a cost advantage over external evaluator fees. However, it can be expensive to maintain an idle staffer between projects.
  • External: External evaluation fees can be high compared to salary, but can be cost-effective when the evaluation is needed only part time or for a limited duration.

Organizational investment
  • Internal: Over time, an internal evaluator can build an organization's capacity to support evaluation. However, this might not be a priority for an organization that will conduct evaluation on an infrequent basis.
  • External: External evaluators can acquaint staff with the value and methods of evaluation and can train staff in data-collection techniques. This can build a culture of evaluation within an institution.

Consideration #3: Should you choose an evaluation firm or an independent contractor?

The question of whether to hire an evaluation firm or an independent contractor hinges on several factors.

Evaluation firm: pros and cons

A robust evaluation can require a cadre of well-trained staff (data collectors, transcriptionists); the necessary equipment (data storage, statistical software, cameras, recorders); and infrastructure (office space, supplies). A project team that requires such support may look to an evaluation firm and should factor in the potential for higher institutional costs.

Independent contractor: pros and cons

Conversely, you may be looking for the nimbleness of an individual contractor who is your direct contact for all aspects of the project and operates with lower overhead. In that case you may need to determine whether your project can be evaluated without the more extensive personnel and resources needed for complex studies.

Deciding on an independent contractor vs. an evaluation firm

In our example from Hotspot California, the Oakland Museum needed a formative evaluator who could respond quickly and efficiently to project developments and who could travel last minute for timely data collection. The museum also required a summative evaluator who could perform data analysis in cooperation with the formative evaluator. In this case, the nimbleness of two independent contractors allowed for timely, efficient, and cooperative evaluation.

Consideration #4: Should you choose a local or an out-of-area evaluator?

When choosing an evaluator you also need to consider the location of potential evaluators relative to your institution and any project partners. If your project is intended to reach your local or regional community, working with a nearby evaluator who understands your local audience and issues can be sensible. However, many informal STEM education projects are collaborations between multiple institutions located in different states or regions of the country. In this case you may not need to be constrained by location in choosing your evaluator as long as you build in the resources that are necessary to support a long-distance evaluator to travel to your project location or locations. Communication among team members via wiki, teleconferences, or Skype can also allow evaluators located cross-country to participate actively in the team.

For the project at the Oakland Museum of California, the project team did not see proximity as an issue and selected two evaluators from out of state. While the decision can boil down to the cost of travel for a local vs. long-distance evaluator, that cost should be weighed against factors that may override location.

Evaluator Qualifications

Once you have a handle on some of the fundamental choices involved in selecting an evaluator, you are ready to begin thinking about the actual evaluator you need. But before scheduling interviews with prospective evaluators, sit down with your project team and identify the skills that are particularly important for your project. Experience shows that the most important overall characteristics of a successful evaluator are the abilities to remain flexible and to solve problems (Kellogg 2004). In addition to these key traits, consider the following criteria: education and background experience; content expertise and experience with similar projects; ability to handle multiple project deliverables; experience with the audience that you are serving; and any unusual aspects of your project which might demand special skills.

Education and background experience

Most professional evaluators have at least a master's degree in evaluation or a related field in the sciences or social sciences. Many have varied backgrounds, having served, for example, as Peace Corps volunteers, educational researchers for technology firms, or senior staff of major conservation organizations. Such experience can enrich your project's content development or audience connections. Look for knowledge and attitudes regarding evaluation that suggest compatibility with your project and evaluation goals.

Query potential candidates about their experience in evaluation design, data collection, and data analysis. Many evaluators have expertise in specific areas such as ethnographic research, statistics, outcomes-based evaluation, timing and tracking, focus groups, bilingual evaluation, or participatory evaluation. Depending on the needs of your project and the roles you envision for your evaluator, this information is crucial. Focus group facilitators need to be able to manage groups, and interviewers must be supportive and skilled listeners.

Ask candidates about their expertise in qualitative, quantitative, and mixed-methods evaluation. While some evaluators prefer one particular technique, a combination of approaches is likely to provide the most useful information.

Ask prospective evaluators what they need to know about your project goals, objectives, and desired outcomes before they can determine appropriate evaluation approaches and methods. Beware of an evaluator who assures you that they know how to evaluate your project before learning enough about it to determine an evaluation plan!

Content expertise and experience with similar projects

Locating a top-notch evaluator with solid credentials is more important than finding an evaluator who knows the specific content area of your project. Consider generalists who are able to grasp your project quickly or specialists who are aware of their subject biases.

Regardless of content expertise, your evaluator should be trained in the evaluation of projects similar to yours or have a track record of completing successful evaluations of similar projects. For example, evaluators will have differing experiences in youth and teen programming, citizen science, family learning, social technologies, virtual worlds, radio, gaming, planetarium shows, live theater, or communities of practice.

Ability to handle multiple project deliverables

Some informal science education projects combine several types of deliverables. Cecilia Garibay (2008) discusses the complexities of evaluating the outcomes of such projects in the Framework for Evaluating Impacts of Informal Science Education Projects (Friedman 2008, p. 96). It is common, for example, to see NSF-funded exhibitions that include related educational programming, or a television series with an accompanying educational website, or collaborative projects among organizations that include components for both public and professional audiences. In some cases a suite of integrated components is designed to work together as a whole to achieve impact. If you are developing such a project, your evaluator must devote resources to evaluating each of the pieces. Understanding how the components interact is critical to developing appropriate evaluation strategies that accurately measure the outcomes of each. For example, your evaluator might suggest that one innovative deliverable be evaluated critically while others receive less review.

Cultural competence

Evaluators interact with a broad range of people from many political, religious, ethnic, language, and racial groups and need special qualities to conduct culturally competent work. Frierson, Hood, Hughes, and Thomas state in The 2010 User-Friendly Guide to Project Evaluation (NSF 2010a, p. 75): "Culturally responsive evaluators honor the cultural context in which an evaluation takes place by bringing needed, shared life experiences and understandings to the evaluation tasks at hand and hearing diverse voices and perspectives. The approach requires that evaluators critically examine culturally relevant but often neglected variables in project design and evaluation. In order to accomplish this task, the evaluator must have a keen awareness of the context in which the project is taking place and an understanding of how this context might influence the behavior of individuals in the project."

The American Evaluation Association affirms the significance of cultural competence in evaluation, stating: "To ensure recognition, accurate interpretation, and respect for diversity, evaluators should ensure that the members of the evaluation team collectively demonstrate cultural competence. Cultural competence is a stance taken toward culture, not a discrete status or simple mastery of particular knowledge and skills. A culturally competent evaluator is prepared to engage with diverse segments of communities to include cultural and contextual dimensions important to the evaluation. Culturally competent evaluators respect the cultures represented in the evaluation throughout the process." (AEA 2011)

Your evaluator will need to develop a trusting relationship with the audience for your project and should have experience in doing so. You will want to locate an evaluator who has developed an understanding of your intended audience and the context in which your project will be implemented.

Cosmic Serpent, a project of the Indigenous Education Institute and UC Berkeley Space Sciences Laboratory, provides an excellent example of a culturally responsive evaluation.

True Stories: Blending Native American and Western evaluation methods

A national evaluation firm collaborated with a Native American consultant to provide evaluation for Cosmic Serpent. The project was conceived to build capacity among museum educators to bridge native and western science learning in informal settings. The team developed an evaluation design using the Diné (Navajo) model, in which native and western evaluation methods are equally valued and respected. All aspects of front-end, formative, and summative evaluation were then conducted collaboratively between the evaluation firm and the Native evaluator. Instrument design, data collection, analysis, and interpretation were all conducted collaboratively, with validation provided by each party.

This collaborative partnership allowed the team to take multiple viewpoints into account and to explore issues surrounding the cultural context of educational evaluation. This process increased the capacity of the indigenous evaluator, who is experienced with the evaluation of Native populations, and of the researchers at the evaluation firm, who have experience in museum and professional development evaluation and the assessment of informal science learning. The front-end and summative evaluation reports for Cosmic Serpent are posted to

Other special situations and skills

In addition to planning for culturally responsive projects and projects with multiple components, evaluators often need special skills and competencies to deal with challenging situations (Kellogg 2004, p. 61; adapted from Patton 1997, p. 131).



  • Situation: Highly controversial issue. Evaluator's task: Facilitating different points of view. Special skills: Conflict-resolution skills.
  • Situation: Highly visible project. Evaluator's task: Dealing with the project publicly; reporting findings in a media-circus atmosphere. Special skills: Tolerance for ambiguity, rapid responsiveness, flexibility, quick learning.
  • Situation: Highly volatile project environment. Evaluator's task: Adapting to rapid changes in context, issues, and focus. Special skills: Cross-cultural sensitivity; skill in understanding and incorporating different perspectives.
  • Situation: Evaluation attacked. Evaluator's task: Preserving credibility. Special skills: Calm; ability to stay focused on evidence and conclusions.
  • Situation: Corrupt project. Evaluator's task: Resolving ethical issues and upholding standards. Special skills: Integrity, a clear ethical sense, honesty.

Adapted from M. Q. Patton, 1997, Utilization-Focused Evaluation, p. 131. In W.K. Kellogg Foundation 2004, W. K. Kellogg Foundation Evaluation Handbook, p. 61.

Familiarity With Standards and Guidelines for the Field

Your evaluator should demonstrate familiarity with best practices in designing and evaluating informal science learning. Guideposts include the Framework for Evaluating Impacts of Informal Science Education Projects (Friedman 2008), Learning Science in Informal Environments (NRC 2009), Surrounded by Science (NRC 2010), and the User-Friendly Handbook to Evaluation (Westat 2010). Familiarity with the competencies of evaluation articulated by the Visitor Studies Association (VSA 2008), the principles of evaluation espoused by the American Evaluation Association (AEA 2004), and the Joint Committee Standards (JCSEE 2011) also is important. An evaluator knowledgeable about these resources will be able to help you articulate the objectives and outcomes for your project and develop a project logic model as part of an appropriate evaluation plan.

To get a clear sense of an evaluator's work, request evaluation reports that he or she has previously prepared. Are they readable and understandable? Do they meet your expectations and standards? You also can evaluate an evaluator's professionalism by factors such as the appearance of their website and the quality of their written communications. And check references!

Communication style with colleagues and funders

Methodological knowledge is not sufficient to conduct and report on a high-quality evaluation. Evaluators also require skills in stakeholder involvement, contract management, and written and oral communication. Concise summaries and creative use of electronic media are important means of delivering evaluation findings. Evaluators also should demonstrate willingness to have their work vetted by colleagues and to respond to their critiques.

Perspectives of an evaluator

Up until this point, as matchmakers we have focused on the needs of your best friend, i.e., your project. But what are your potential dates looking for? Both parties must contribute to a perfect match. It's time to consider what evaluators seek in a project.

Above all, evaluators want to conduct their work in a professional context in which the project developers have clear goals and objectives and understand enough about the process of evaluation to support and value their work. Team cohesiveness is another factor. Your evaluator needs team members who can facilitate the project work plan, develop the deliverables, nurture the project partners and advisors, and communicate with funders. Some evaluators look for projects that will further their professional interests, for example early childhood science education, educational media, or cultural science learning. A firm plan for dissemination, in terms of publications and conference presentations, enables them to share their work with the field.

Locating an evaluator

Now that you have a good search image for your evaluator, it's time to complete your role as matchmaker by finding one. There are many ways and places to locate qualified candidates. While the search is not quite as easy as the matchmaking metaphor makes it seem, below are some strategies specific to informal science education.

Look at evaluator databases and related listservs.

  • maintains an updated list of ISE evaluators. Each entry includes the evaluator's affiliation, professional bio, and interest and expertise descriptors, along with selected research and publications.
  • maintains a "find an evaluator" database.
  • The American Association of Museums Committee on Audience Research and Evaluation (AAM-CARE) publishes a directory of evaluators for AAM-CARE members.
  • The Visitor Services in Museums Listserv (VSMUS) operates as a forum to bring together museum professionals and others concerned with the quality of the visitor experience in museums.
  • EVALTALK is a listserv of over 2,000 members hosted by the American Evaluation Association.

Network with other NSF grantees. Principal Investigators who have implemented projects similar to yours may be able to suggest evaluators who will be a good fit with your project. A strong personal recommendation and a discussion of an evaluator's strengths and weaknesses from someone who has worked with that individual can be extremely useful (NSF 2010a, p. 128). Several resources can be pursued:

Evaluation literature

  • Find evaluation studies of projects like your own. Peer-reviewed journals with articles based on evaluations of informal science education projects include: Curator: The Museum Journal, Journal of Museum Education, International Journal of Science and Education, International Journal of Learning and Media, Afterschool Matters, Cultural Studies in Science Education, Journal of Research in Science Teaching, Journal of the Learning Sciences, Science Education, Studies in Science Education, Visitor Studies, American Journal of Evaluation, Evaluation, New Directions for Evaluation, and Evaluation and Program Planning, among others. The Research2Practice website contains a set of briefs summarizing recent peer-reviewed educational research. The briefs are written with the interests, needs, and institutional settings of informal science educators in mind.

Conference presentations

Putting the team together

It is unlikely, but not impossible, that you will find a blind date who meets all of your best friend's dreams. It is also unlikely that you will find an evaluator who is representative of your intended audience, knowledgeable about your specific content area, experienced with your type of proposed deliverables -- and available. Most important is locating an evaluator whose skills and experience, along with those of the other people involved with the project, create a cohesive and well-rounded team. And be sure that you and your team can work with your evaluator and enjoy the experience!

True Stories: Rounding out a project team with the right evaluator

An example of efficient team building is illustrated by a professional development project of the Astronomical Society of the Pacific. Astronomy From the Ground Up (DRL 04-51933) was designed to build the capacity of informal STEM educators in science museums to deliver astronomy to their visitors more effectively. The PIs included astronomers and astronomy educators from the Astronomical Society of the Pacific and the National Optical Astronomy Observatory, along with the Director of Exhibitions, Research, and Publications for the Association of Science-Technology Centers. So the science content, pedagogy, and representation of the intended audience were covered. What the project needed was an evaluator familiar with the culture of informal STEM educators who could gauge their experience with the project and determine the impact on their daily work. The evaluator also needed the capacity to assess the project at multiple sites and on multiple levels including on-site and distance learning workshops.

The team selected an evaluator whose background as a planetarium educator and whose experience as an educational researcher in science museums provided the expertise necessary to work closely with astronomy educators. This evaluator, assisted by a team of researchers at his firm, conducted the front-end, formative, and summative evaluation, with independent validation of the summative evaluation design provided by an external consultant.


Once you've put together a good team, you are well on your way to carrying out an accurate and valuable evaluation. Keep in mind that an important part of an evaluator's job is to assist in building the skills, knowledge, and abilities of other project team members and stakeholders.