
Wednesday, December 27, 2023

Digital Pedagogy Toolbox: Integrating Digital Literacy Practices

I recently wrote for BCcampus as part of their "Digital Pedagogy Toolbox" series. In my article, I explored how educators and learners can develop digital competencies and skills by using the B.C. Post-Secondary Digital Literacy Framework as a roadmap. The framework includes eight thematic competencies within digital literacy: 1) ethical and legal; 2) technology; 3) information literacy; 4) digital scholarship; 5) communication and collaboration; 6) creation and curation; 7) digital well-being; and 8) community-based learning.

What #DigitalLiteracy meant 20 years ago is certainly not what it means today. As learners, educators, and digital citizens, we are not just consumers of digital content; we are also creators, curators, and contributors, and along the way we all leave digital footprints behind.

The #DigitalCompetencies that underpin digital literacy involve a deeper understanding of how to use digital tools and technologies in various contexts, adapt to new digital environments, and critically evaluate digital information.

As educators, integrating digital literacy into every aspect of the learning journey is not just a pedagogical choice; it’s an ethical imperative.



Tuesday, December 12, 2023

Activity vs. Performance Measures in Training Evaluation

Image by Gerd Altmann from Pixabay

Organizations continue to invest in employee training and ongoing development. To make sure these learning and development investments are effective, it is important to evaluate and measure their impact. But which metrics should you be tracking? Which measures offer a clearer understanding of training impact?

Evaluating the impact of training initiatives can be done using activity measures and performance measures.

Activity measures focus on the "process of learning" and capture different aspects of the learning journey. They help assess learner participation through metrics such as training completion time, attendance records, and the time allocated to specific activities during training. For example, when measuring the impact of software training, activity measures may include how many users participated in the program, what percentage completed it successfully, and how much time was dedicated to working in the software during the training.

Performance measures focus on the "outcome of learning" and help assess how well learners can apply their knowledge to real-world tasks and situations. Job performance, customer satisfaction, and test scores are some examples of the types of measures that can be included in this category. For example, when measuring the impact of software training, performance measures can include the number of errors made by software users, the scores on a test of software proficiency, and customer satisfaction with the work produced by users of the software.
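To make the distinction concrete, here is a minimal sketch in Python that computes a couple of activity measures and one performance measure for a software-training program. All of the data, field names, and figures are hypothetical and only illustrate the idea; they are not taken from any real evaluation.

```python
# Minimal sketch: one way to compute activity vs. performance measures
# for a hypothetical software-training program.
# All records and field names below are illustrative assumptions.

trainees = [
    # completed: finished the course; errors_before/after: errors per 100 tasks on the job
    {"name": "A", "completed": True,  "hours_spent": 6.5, "errors_before": 12, "errors_after": 4},
    {"name": "B", "completed": True,  "hours_spent": 4.0, "errors_before": 9,  "errors_after": 7},
    {"name": "C", "completed": False, "hours_spent": 1.5, "errors_before": 15, "errors_after": 14},
]

# Activity measures: describe participation in the learning process.
completion_rate = sum(t["completed"] for t in trainees) / len(trainees)
avg_hours = sum(t["hours_spent"] for t in trainees) / len(trainees)

# Performance measure: describes the outcome of learning on the job.
# Here, the average reduction in errors among those who completed the training.
completers = [t for t in trainees if t["completed"]]
avg_error_reduction = sum(t["errors_before"] - t["errors_after"] for t in completers) / len(completers)

print(f"Completion rate: {completion_rate:.0%}")                              # activity
print(f"Average hours in training: {avg_hours:.1f}")                          # activity
print(f"Average error reduction: {avg_error_reduction:.1f} per 100 tasks")    # performance
```

Note how the completion rate and hours say nothing on their own about whether errors actually dropped on the job; that gap is what the performance measure is meant to fill, and it is also why attributing the change to the training rather than to other workplace factors takes care.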

Activity measures are good for assessing initial engagement and training completion, while performance measures provide a better picture of how well employees are applying their newly acquired skills.

However, there are challenges in using performance measures. For example, it is difficult to identify performance measures that are directly related to the training and that accurately reflect the desired outcomes. Collecting and analyzing performance data is also costly and time-consuming. Most importantly, it is challenging to isolate the impact of the training and determine whether any changes in performance were due to the learning or training initiative rather than other factors, such as changes in the work environment. While it is good to plan the evaluation of learning and training projects around performance measures, they may not be applicable or cost-effective for every kind of training.

The combination of activity measures and performance measures is usually the best option for gathering significant and valuable data. However, what makes up the appropriate combination of these measures may differ according to the particular training or learning approach and intended results.


Friday, December 1, 2023

Navigating Generative AI Citations in Your Writing

There has been much conversation about how generative AI should be cited in written work. I recently wrote an article for BCcampus and leveraged ChatGPT for one of the sections specifically to assist in building on my key ideas and in generating some of the scenarios. 

At the time I wrote the article, there were no author-writing guidelines or policies at BCcampus that permitted or prohibited the use of AI tools. But as the author, it was important for me to be ethical and transparent about my writing and to take full accountability and responsibility for my ideas. The team at BCcampus supported and encouraged me in doing so.

Based on my limited research on how AI is cited in scientific writing, I wanted to acknowledge the use of ChatGPT as a ‘method’ of research used for a specific section. I also referred to Canada’s Guide on the use of Generative AI and the “FASTER” principles.

This is the reference I crafted, and we decided to include it in the section for which ChatGPT was leveraged:

"ChatGPT, an AI-powered language model, was used to assist in building on the key ideas by the author and generating some of the digital literacy application scenarios. Specific and targeted prompts were crafted to elicit initial scenario drafts. Various options were generated, which were then edited, curated, and refined to create the final scenarios. Throughout the process, human judgment and expertise guided in ideating, prompting, writing, editing, curating, selecting, and shaping the scenarios to ensure they were appropriate and aligned with the goals of the article. The author was mindful of the limitations and potential biases of AI systems and approached the use of ChatGPT critically and ethically."

My intent in including this paragraph was to promote the responsible, transparent, and ethical use of AI in my writing. Some may find this useful; some may find it unnecessary or something that can be omitted or perhaps even ignored.

What do you think of this reference? Do you use ChatGPT for your work? How are you citing it in your references? What guidelines and principles are you following to use it fairly and transparently?

PS: If you are interested in reading the article I wrote, you can find it here: 
https://bccampus.ca/2023/11/15/digital-pedagogy-toolbox-integrating-digital-literacy-practices/

The article highlights how educators and learners can develop digital competencies and skills by using the B.C. Post-Secondary Digital Literacy Framework as a roadmap. It explores the key facets of digital literacy in the context of the framework, with a scenario that highlights practical strategies for implementing digital literacy into every aspect of the learning journey. ChatGPT was used to assist in building on my key ideas and generating some of the digital literacy application scenarios.

The BCcampus Open Education Team has since released its guidelines on leveraging generative AI tools for OER content creation.

#ChatGPT #AIEthics #AIGuidelines #AIForWork #Writing #ArticleWriting #CitingAI #AIReference