Choosing an Electronic Portfolio Strategy that Matches your Conceptual Framework
©2004, Helen C. Barrett, Ph.D. and Judy Wilkerson, Ph.D.
There are a variety of strategies for developing electronic portfolios. David Gibson and I outlined two different technological approaches to electronic portfolio development: using generic, off-the-shelf tools (GT) or using customized, online systems (CS). That article presents a series of rubrics to evaluate the electronic portfolio services from the perspective of a variety of criteria covering both approaches. In this paper, I will address the underlying philosophy or conceptual framework that should be considered when selecting a strategy for developing electronic portfolios.
Many Teacher Education institutions are adopting electronic portfolios from a perception that the accreditation organizations require them, which is not accurate. NCATE Unit Standard 2 requires that an institution adopt an assessment system that "collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations to evaluate and improve the unit and its programs." In the Proceedings of the 2004 Conference of the Society for Information Technology and Teacher Education, my paper [PDF] outlines the differences between electronic portfolios and online assessment management systems. I have also attached a case study from a faculty member in a teacher education program that adopted one of the commercial e-portfolio systems, along with recommendations that come from that experience.
At the 2004 AACTE conference, one university presented a session on how to make the decision about an electronic portfolio strategy, based on their very thorough needs assessment process. Most of the criteria and discussion dealt with collecting student work and showing numerical assessment data for accreditation reports. In a live web conference on E-Portfolios, sponsored by Educause, Barbara Cambridge discussed strategies that support deep student learning. This paper will focus on matching an institution's conceptual framework and philosophical orientation with the e-portfolio strategy to pursue, using cognitive learning theory and activity theory as the foundation for approaching the decision. I will also reference the National Academy of Sciences book Knowing What Students Know to design a combined approach to meet both the needs of the institution and the learners.
What is an electronic portfolio? Here is a definition established by the National Learning Infrastructure Initiative (NLII, 2003):
An electronic portfolio is a collection of authentic and diverse evidence, drawn from a larger archive, representing what a person or organization has learned over time, on which the person or organization has reflected, and designed for presentation to one or more audiences for a particular rhetorical purpose.
The literature on traditional paper-based portfolios describes these portfolio development processes: Collecting, Selecting, Reflecting, Projecting, Celebrating. The infusion of technology adds the following dimensions to this process: Archiving, Linking/Thinking, Storytelling, Planning, Publishing. To use portfolios effectively for assessment, a learning organization needs to establish a culture of evidence. Evidence in an electronic portfolio is not only the artifacts that a learner places there; to be considered evidence of learning, the artifacts need to be accompanied by the learner's rationale, the argument as to why these artifacts constitute evidence of achieving specific goals, outcomes, or standards. Furthermore, in "high stakes" environments it is not enough for a learner to claim that the artifacts are evidence of achievement; the evidence needs to be validated by a trained evaluator, using a well-developed rubric with identifiable and specific criteria. This process can be represented by a simple formula: Evidence = Artifacts + Reflection (Rationale) + Validation (Feedback) (Barrett, 2003).
In her recent webcast on Electronic Portfolios: Why Now?, Barbara Cambridge of the American Association for Higher Education identified a set of principles for deep learning.
She gave many illustrations that supported each of these principles. One example pointed out the importance of using e-portfolios to support durable learning... learning that lasts:
At Bowling Green University, the emphasis is on durable learning, learning that lasts beyond a course. The newly emerging science of learning offers a growing body of principles and research findings to be systematically applied. E-portfolio technology offers learners the means to document and reinforce their learning. Bowling Green looks at how science-of-learning principles are applied to tangible activities.
She illustrated the importance of reflection in learning by quoting one of the founders of modern learning theory:
"To reflect is to look back over what has been done so as to extract the net meanings which are the capital stock for intelligent dealing with further experiences. It is the heart of intellectual organization and of the disciplined mind." – John Dewey, Experience and Education, 1938.
From IUPUI, one of the pioneers in online portfolio system development, here is a statement about the types of skills associated with electronic portfolio development:
“We are finding that a variety of transcendent skills, such as critical thinking, communication, information literacy, and understanding diverse societies and cultures, are being highlighted through the development of ePortfolios.” -- Sharon Hamilton, IUPUI
Many commercial tools have come to market in the last three years, claiming to meet institutions' accreditation requirements by taking advantage of Internet technologies. Many of these systems promise to support student portfolios AND aggregated assessment data to meet reporting requirements. This paper will discuss the difficulty of meeting these two diverse needs with a single product: the designers of these products are combining two different paradigms of portfolios which, by their very nature, are in conflict with each other. Pearl and Leon Paulson (1994) outlined these differences more than ten years ago:
The positivist paradigm: "The purpose of the portfolio is to assess learning outcomes and those outcomes are, generally, defined externally. Positivism assumes that meaning is constant across users, contexts, and purposes… The portfolio is a receptacle for examples of student work used to infer what and how much learning has occurred." (p. 36)
The constructivist paradigm: "The portfolio is a learning environment in which the learner constructs meaning. It assumes that meaning varies across individuals, over time, and with purpose. The portfolio presents process, a record of the processes associated with learning itself; a summation of individual portfolios would be too complex for normative description." (p. 36)
Tension between two approaches
“The two paradigms produce portfolio activities that are entirely different...
“The positivist approach puts a premium on the selection of items that reflect outside standards and interests....
“The constructivist approach puts a premium on the selection of items that reflect learning from the student’s perspective.” (p.36)
Teacher Education institutions need to serve both of these purposes: licensure/certification and continued growth and development. The positivist approach appeals to those with interests in licensure/certification because it requires that learning be held constant. And that is good: it provides a common core against which growth can be measured. We are establishing the minimal competency that makes a practitioner "safe to teach," which can then be used to build a career ladder. There may be a hierarchy here: the positivist approach is "the floor below which they cannot fall." The constructivist approach is where we hope our teacher candidates will go above the floor, showcasing the many ways they exceed minimum requirements to make their classrooms exciting places to learn.
A recent article published online in the Education Policy Analysis Archives highlights the legal and psychometric issues of using portfolios for high-stakes assessment. Wilkerson and Lang's (2003) article, "Portfolios, the Pied Piper of Teacher Certification Assessments: Legal and Psychometric Issues," points out the potential legal liabilities for teacher education programs that want to use portfolios to meet accreditation requirements and to make high-stakes decisions related to licensure/certification. They quote a recent AACTE study which found that 90% of SCDEs use portfolios to make decisions about candidates and 40% do so as a certification or licensure requirement (Salzman et al., 2002).
These authors point out that "Portfolio assessments, like all high-stakes tests, must stand the tests of validity, reliability, fairness, and absence of bias." Their paper describes several scenarios in which a student could pursue legal action if denied certification as a result of adverse decisions based on a high-stakes portfolio. They point out that portfolios "are excellent tools for reinforcing learning and for making formative decisions about candidate knowledge, skills, dispositions, and growth." The last section of the paper contains a chart describing requirements for tests and caveats for portfolios where graduation and certification decisions will be made in a standards-based environment. Here is a very brief summary of those issues:
Wilkerson and Lang (2003) suggest a more limited and focused use of portfolios to measure specific, job-related skills within the certification/licensure context. While strongly supporting the use of portfolios for learning and showcasing, they suggest the use of targeted portfolios within the certification/licensure process. They provide an example of a K-12 Student Portfolio that can be used to document the candidate’s ability to teach critical thinking and content knowledge to children. "By limiting the use and complexity of portfolios [for certification/licensure], the long known values of portfolio assessment can be realized without burdening faculty and students with excessive requirements that have limited use and without taking serious psychometric and legal risks."
In summary, these authors point out that showcase portfolios are vulnerable to legal challenge when used for a purpose other than that for which they were intended.
In a workshop conducted by these authors prior to the 2004 AACTE conference, they also discussed levels of inference in portfolio use with regard to certification/licensure and accreditation. An inference is a conclusion drawn from a set of facts or circumstances, forming an opinion or a guess that something is true because of the information available. The following table was created by Wilkerson and Lang (2004) to illustrate the amount of opinion or estimation required by different types of artifacts in a portfolio. In the instance of low inference, the tasks are pre-determined, there are clear rubrics for evaluation, and the tasks are embedded in the curriculum and evaluated as part of regular coursework. Examples of medium inference include some student choice in the selection of evidence, which results in more effort for faculty to make judgments about the appropriateness of the choices in demonstrating the standards. High inference means that students have a wide choice of artifacts, and faculty often need to reassess what has already been assessed. A true student-centered portfolio is a powerful tool for learning and formative assessment, but requires a lot more effort and judgment to be used in high stakes evaluation.
Low inference: Item selection is pre-determined; no faculty time or special skill is required if task rubrics are included; excellent for accreditation and accountability.
Medium inference: Students have some choice, typically from a list of options; faculty decide whether students made the right choices and assess the composite collection; tricky for accreditation, especially if no task rubrics are included.
High inference: Students have wide choices; faculty reassess what was already assessed; suited to learning and formative evaluation, but not useful for accreditation.
(Wilkerson & Lang, 2004)
How do we match the needs of the institution for valid and reliable data for accreditation and accountability while still meeting the needs of learners for formative assessment to enhance and support the learning process? These issues are currently being addressed by many experts in the educational community. In 2001, the National Research Council published a guidebook on educational assessment, Knowing What Students Know: The Science and Design of Educational Assessment. As stated therein, here is the purpose of educational assessment:
"Educational assessment seeks to determine how well students are learning and is an integrated part of the quest for improved education. It provides feedback to students, educators, parents, policy makers, and the public about the effectiveness of educational services." (p. 1)
Several factors are improving assessment: advances in the cognitive sciences, a broadened concept of what is important to assess, advances in the measurement sciences, and an expanded capability to interpret more complex forms of evidence. Multiple measures matter: one type of assessment does not fit all situations, yet a single assessment is often used for multiple purposes, and that is a problem: "…the more purposes a single assessment aims to serve, the more each purpose will be compromised." (p. 2) Assessment is always a process of reasoning from evidence and is imprecise to some degree; results are only estimates of what a person knows and can do. Therefore, we need a richer and more coherent set of assessment practices.
Every assessment rests on three pillars, which the National Research Council calls the Assessment Triangle: a model of how students learn (cognition), tasks that elicit student performance (observation), and a method for drawing conclusions from that performance (interpretation). [Figure: The Assessment Triangle. National Research Council (2001), Knowing What Students Know, p. 44]
The challenge for us is to find e-portfolio strategies that meet the needs of both the students, by supporting this deep learning, and the institution, by providing the information it needs for accreditation. In many states, the NCATE accreditation review is conducted collaboratively with a state program approval review which may include licensure and certification decisions; hence the focus on psychometric and legal issues. State agencies must be confident that graduates collectively meet the state's requirements for teaching professionals. The review process therefore requires the aggregation of data for use in determining student quality, program quality, and continuous improvement. Since the fundamental principle of a student-centered portfolio is to allow choice by the student in the collection of artifacts, aggregation of data becomes virtually impossible.
The commercial systems available at the beginning of 2004 require effort and creativity to implement both processes without compromising one or the other. These conflicting paradigms require a multi-faceted system, one that allows a learner to build a meta-tagged digital archive of artifacts, one that helps teacher candidates build learner-centered constructivist portfolios using those artifacts, and another that lets an institution collect the assessment data that meets their accreditation requirements.
How do we create an institution-centered assessment and accountability system without losing the power of the portfolio as a student-centered tool for lifelong learning and professional development? How do we teach teacher candidates to use sound assessment based on established performance expectations? How do we maintain the authenticity of the portfolio process and help our teacher candidates develop the skills and attitudes necessary to implement this strategy with their own students once they have their own classrooms? The answer is modeling! Here is a design of an accountability system that is based on the Assessment Triangle:
An Assessment Management System Based on the Assessment Triangle
Congruence with Conceptual Framework
Create a system that is congruent with your underlying learning philosophy or conceptual framework while remaining aligned with state and national standards. Educational philosophy offers several contrasting, sometimes complementary, paradigms: behaviorism vs. constructivism; positivism vs. hermeneutics; portfolio as test vs. portfolio as story. What is the underlying philosophical base of your program? How does the strategy you choose fit that model?
Tasks, Rubric, Record of Achievement
Identify tasks or situations that allow one to assess students’ knowledge, skills, and dispositions through both products and performance. Create rubrics that clearly differentiate levels of proficiency (but use 3 or 4 levels only). Create a recordkeeping system to keep track of the rubric/evaluation data based on multiple measures/methods. Provide opportunities for students to learn and resubmit, maximizing diagnosis and remediation. Model the power of assessment as learning.
Reporting System and Feedback Loop
Create a reporting process to aggregate and analyze assessment data, to draw inferences from performance evidence, and to use the results for program improvement.
Use three different systems that electronically talk to each other: a digital archive of learner artifacts, a learner-centered portfolio tool, and an institution-centered assessment management system.
A graphic (concept map) comparing assessment management systems with electronic portfolios accompanies this paper.
Looking at the bottom center of this concept map, we begin with learning experiences that are embedded in the curriculum. Those learning experiences should produce work that can be stored in a digital archive of learner artifacts, often called a working portfolio. There are two ways these artifacts can be used as evidence of learning: in the educational institution's assessment system and in the learner's portfolio. This process is interactive and reflective, connecting the artifacts with the learner's reflection, which is the rationale or justification for using the artifact as evidence of learning. An assessor looks at the artifact and the learner's reflection, and decides whether the artifact meets the guidelines of the performance task as outlined in the associated rubric.
Let's first focus on the left side of the graphic, which describes the positivist paradigm of evaluation and making inferences, or the "Portfolio as Test" (as described by Wilkerson and Lang). The artifacts in the learner's archive (the required tasks in the assessment system) are evaluated by an assessor, based on the performance tasks and rubrics. The data are collected for certification or licensure, and for accreditation or program assessment. The results are stored in the institution's assessment management system, an institution-centered data management system. This process yields institution-centered aggregated data leading to certification or licensure for the individual teacher candidate, and supports the institution's program assessment. The focus of this process is on more limited-term evaluation, with an external locus of control.
Now, let's focus on the right side of the graphic, which describes a constructivist paradigm of making meaning and assessment as learning, or the "Portfolio as Story" (as described by Paulson & Paulson). The artifacts in the learner's archive are collected from the learning experiences. Through the process of reflecting on her own learning, the learner selects artifacts and reflections for self-evaluation based on her self-determined purposes. A learner may create several presentation portfolios for multiple purposes and audiences. The process ideally results in student-centered documentation of deep learning, for developing self-concept and for presentation to multiple audiences (peers, employers, etc.). The focus of this process is on lifelong, self-directed learning, with an internal locus of control.
Paying equal attention to both approaches will result in a more balanced assessment system that supports deep learning.
I've been talking about assessment of learning and portfolios that support learning, but what about learner motivation? Some preliminary research on student attitudes toward portfolios in general, and electronic portfolios specifically in Teacher Education programs, shows that teacher candidates usually view portfolios as something they have to produce to get out of the program, and many indicated that they would not continue the process after they leave the program (McCoy & Barrett, 2004). Are we graduating a cohort of students that does not value the portfolio as a lifelong learning strategy? Are we doing a disservice to our teacher candidates by having them experience the portfolio only as a high-stakes assessment tool? How can we encourage their intrinsic motivation and engagement in the portfolio process?
There are three general components of the portfolio development process: content, purpose, and process. The content includes the evidence (the learner's artifacts and reflections). The purpose includes the reason for creating the portfolio, such as learning, professional development, assessment, or employment. The process includes the tools used, the sequence of activities, the rules established by the educational institution, the reflections a learner constructs while developing the portfolio, the evaluation criteria (rubrics), and the collaboration or conversations about the portfolio.
I propose that there are developmental levels of portfolio implementation in terms of motivation.
When teacher candidates start developing their portfolios, they need direction and scaffolding, so the institution controls the content, purpose, and process, resulting in an external locus of control. If the goal is to foster learners' intrinsic motivation to develop and maintain their portfolios, then learners need ownership of the content, purpose, and process. This diagram illustrates the assumption that greater learner control over each of these components will lead to more intrinsic motivation… but this is a hypothesis that remains to be supported by empirical research.
In a discussion of experienced teachers constructing teaching portfolios in Professional Development Schools (Teitel, Ricci & Coogan, 1998), the following recommendations were made (p. 152).
How do we take “compliance” portfolios developed in teacher education programs and transform them into professional development portfolios once those teachers are in the classroom? How do we build an expectation that portfolios can be used for long term growth while at the same time using these portfolios for high stakes assessment? Is this an impossible dream? Cognitive dissonance is the irritant to spur us on to new levels of understanding… to see these seemingly disparate phenomena in new contexts.
Educational institutions wanting to initiate electronic portfolio programs need to consider many factors carefully before deciding on the strategy they want to use, or the software they will purchase. This paper has tried to articulate the differences between electronic portfolios and assessment systems, with the recommendation to keep the two processes separate, but with communication links between them. Matching the philosophical orientation with e-portfolio tools should reduce the cognitive dissonance and the conflict between learners' needs and institutional requirements. The result should be support for deep, sustainable, self-directed, lifelong learning.
References

Ball State University (2004). Presentation at the AACTE Conference, Chicago, February 10, 2004.
Barrett, Helen (2003). Presentation at the First International Conference on the e-Portfolio, Poitiers, France, October 9, 2003. http://electronicportfolios.org/portfolios/eifel.pdf
Cambridge, Barbara (2004) “Electronic Portfolios: Why Now?” Educause Live Teleconference, February 11, 2004.
DePaul University (2004). "ePortfolio: To Buy or Not to Buy." Presentation at the American Association of Colleges for Teacher Education (AACTE) Conference, Chicago, February 10, 2004. http://homer.itd.depaul.edu/users/tsmith27/conference/AACTE-2004.asp
Lang, W.S. & Wilkerson, J.R. (2004, February 7). Pre-conference workshop handouts, "An Assessment Framework: Designing and Using Standards-Based Rubrics to Make Decisions About Knowledge, Skills, Critical Thinking, Dispositions, and K-12 Impact." 2004 Conference of the American Association of Colleges for Teacher Education (AACTE), Chicago.
Lucas, Catharine (1992). "Introduction: Writing Portfolios - Changes and Challenges." In K. Yancey (Ed.), Portfolios in the Writing Classroom: An Introduction (pp. 1-11). Urbana, Illinois: NCTE.
Lyons, Nona (1998). With Portfolio in Hand. New York: Teachers College Press.
McCoy, A. & Barrett, H. (2004) Paper presented at the American Educational Research Association meeting, San Diego.
National Research Council (2001). Knowing What Students Know: The Science and Design of Educational Assessment. Washington, D.C.: National Academy Press.
Paulson, F.L., Paulson, P.R.& Meyer, C.A. (1991) “What Makes a Portfolio a Portfolio?” Educational Leadership, 48:5, pp. 60-63
Paulson, F.L. & Paulson, P. (1994) “Assessing Portfolios Using the Constructivist Paradigm” in Fogarty, R. (ed.) (1996) Student Portfolios. Palatine: IRI Skylight Training & Publishing
Rogers, Douglas (Baylor University) (2003) "Teacher Preparation, Electronic Portfolios, and NCATE." Proceedings of the 2003 Conference of the Society for Information Technology in Teacher Education Conference, Albuquerque, March 24-29. (pp. 163-5)
Shulman, Lee (1998) "Teacher Portfolios: A Theoretical Activity" in N. Lyons (ed.) With Portfolio in Hand. (pp. 23-37) New York: Teachers College Press.
Stefanakis, Evangeline (2002) Multiple Intelligences and Portfolios. Portsmouth: Heinemann
Teitel, L., Ricci, M., & Coogan, J. (1998). "Experienced Teachers Construct Teaching Portfolios: Culture of Compliance vs. a Culture of Professional Development." In N. Lyons (Ed.), With Portfolio in Hand (pp. 143-155). New York: Teachers College Press.
Thompson, Sheila & Cobb-Reiley, Linda (2003). "Using Electronic Portfolios to Assess Learning at the University of Denver." Presentation at 2003 AAHE Assessment Conference, Seattle, June 23, 2003.
Wilkerson, J.R., & Lang, W.S. (2003, December 3). "Portfolios, the Pied Piper of Teacher Certification Assessments: Legal and Psychometric Issues." Education Policy Analysis Archives, 11(45). Retrieved February 15, 2004, from http://epaa.asu.edu/epaa/v11n45/
Modified March 15, 2006