Training programme evaluation: training and learning evaluation, feedback forms, action plans and follow-up

This section begins with an introduction to training and learning evaluation, including several helpful reference models. The introduction also explains that for training evaluation to be truly effective, the training and development itself must be appropriate for the person and the situation. Good modern personal development and evaluation extend beyond the obvious skills and knowledge required for the job, the organisation or a qualification. Effective personal development must also consider: individual potential (natural abilities that are often hidden or suppressed); individual learning styles; and whole-person development (life skills, in other words). Where training or teaching seeks to develop people (rather than focusing merely on a particular qualification or skill), the development must be approached more flexibly and individually than under traditional paternalistic (authoritarian, prescriptive) methods of design, delivery and testing. These principles apply to teaching and developing young people too, and that field interestingly offers some useful lessons for workplace training, development and evaluation.

Introduction

An important aspect of any evaluation is its effect on the person being evaluated. Feedback is essential for people to know how they are progressing, and evaluation is also vital for the learner's confidence. Because people's commitment to learning depends so heavily on confidence and on a belief that the learning is achievable, the way that tests and assessments are designed and managed, and the way results are presented back to learners, is a very important part of the learning and development process. People can be switched off the whole idea of learning and development very quickly if all they receive is negative, critical test results and feedback. Always look for positives in negative results. Encourage and support; do not criticise without adding some positives, and certainly never focus on failure, or that is what you will produce. This is a grossly overlooked factor in all kinds of evaluation and testing, and because this element is usually missing from evaluation and assessment tools, the point is emphasised loud and clear here. So always remember: evaluation is not just for the trainer or teacher or organisation or policy-makers; evaluation is vital for the learner too. That is perhaps the most important reason of all for evaluating people properly, fairly, and with as much encouragement as the situation allows.

Most of the specific content and tools below for evaluating workplace training are based on the work of Leslie Rae, an expert and author on the evaluation of learning and training programmes, whose contribution is gratefully acknowledged. W Leslie Rae has written more than 30 books on training and the evaluation of learning; he is an expert in his field. His guide to the effective evaluation of training and learning, training courses and learning programmes is a set of rules and techniques useful for all trainers and HR professionals, and it is supplemented by an excellent series of learning evaluation and follow-up tools, created by Leslie Rae.
It is advisable to read this article before using the evaluation and follow-up training tools. In particular, see the notes on this page about using self-assessment to measure ability before and after training (that is, skills improvement and training effectiveness), which relate especially to the 3-Test tool (explained and provided below).

See also the section on Donald Kirkpatrick's training evaluation model, which represents fundamental theory and principles for evaluating learning and training. See also Bloom's Taxonomy of learning domains, which sets out fundamental principles for training design and evaluation, and thereby for training effectiveness. Erik Erikson's Psychosocial (Life Stages) Theory is very helpful in understanding how people's training and development needs change according to age and stage of life. These generational aspects are increasingly important in meeting people's needs (now firmly a legal requirement under age discrimination law) and also in making the most of what different age groups can offer work and organisations. Erikson's theory is especially helpful when considering people's broader personal development needs and possibilities, beyond job-related skills and knowledge.

Multiple Intelligences theory (the section includes a self-test) is highly relevant to training and learning. This model helps address the natural abilities and individual potential that can lie hidden or suppressed in many people (often by employers). Learning Styles theory is very relevant to training and teaching, and features in Kolb's model and in the VAK learning styles model (which also includes a free self-test tool). Learning Styles theory also relates to methods of assessment and evaluation, where inappropriate testing can produce seriously misleading results. Testing, as well as delivery, must take account of people's learning styles: for example, some people find it very difficult to prove their competence in a written test, yet can demonstrate remarkable competence when asked to give a physical demonstration. Text-based evaluation tools are not the best way to assess everyone. The Conscious Competence learning stages theory is also a helpful perspective for learners and teachers. The model helps explain the learning process to trainers and learners, and also helps refine judgements about competence, since competence is rarely a simple question of 'can do' or 'cannot do'. The Conscious Competence model particularly provides encouragement to teachers and learners when frustration arises from an apparent lack of progress. Progress is not always easy to see, but it is often happening nonetheless.

Lessons from (and perhaps also for) children's education

Although the various theories and models here are presented mainly for adult-oriented training, the principles also apply to the education of children and young people, which in turn offers some useful fundamental lessons for workplace training and development. Above all, while testing and assessment are of course important (because if you can't measure it, you can't manage it), the most important thing is to train and develop the right things in the right way.
Assessment and evaluation (and the testing of children) will not ensure effective learning and development if the training and development itself has not been properly designed in the first place.

Lessons for the workplace are everywhere you look in children's education, so please excuse this diversion. If children's education in the UK ever truly worked, successive governments managed to dismantle it through the 1980s, and have made it progressively worse since, by imposing narrowly defined skills and delivery methods, narrowly based testing criteria and targets, and a self-defeating administrative burden. All of this perfectly characterises the arrogance and delusion found in X-Theory management structures, in this case among high-and-mighty civil servants and politicians who do not live in the real world, who never went to an ordinary school, and whose children don't either. The big lesson from this for organisations and workplace training is that Theory-X direction combined with narrow thinking is a disastrous combination. Incidentally, according to some of these same people, society is broken, and our schools and parents are to blame and must take responsibility for sorting out the mess. Blaming the victims is another classic behaviour of incompetent government. Society is not broken; it merely has some irresponsible leadership, which prompts another interesting point: the quality of leadership, in a government or an organisation, is defined by how it develops its people. Good leaders have a responsibility to help people understand, develop and fulfil their own individual potential. This is very different from merely training people to do a job, or teaching them to pass exams and get into university, which ignores far more important human and societal needs and opportunities.

Thankfully, modern educational thinking (and, one hopes, policy too) now seems to be addressing children's broader development needs, rather than aiming simply to transfer knowledge in order to pass tests and exams. Knowledge transfer for the purpose of passing tests and exams, especially when based on arbitrary and extremely narrow ideas of what to teach and how, has little meaning or relevance to the development potential and needs of most young people, and even less relevance to the real demands and opportunities of the modern world, let alone to the life skills needed to become a confident adult able to make a positive contribution to society. The deeply flawed UK children's education system of the last thirty years, and its negative effects on society, offers many useful lessons for organisations. Perhaps most importantly: if you fail to develop people as individuals, and aim only to transfer knowledge and skills to meet the organisation's priorities of the day, then you will seriously hinder your chances of developing a happy, productive workforce, assuming you want one. Which, I suppose, is another subject altogether.
Assuming you do want to develop a happy, productive workforce, it is useful to consider and learn from the mistakes that have been made in children's education:

- The range of learning was too narrowly defined and ignored individual potential, which consequently devalued or blocked a wide variety of learning.
- Learning focused on arbitrary criteria set from the policy-makers' own perspective (classic arrogant X-Theory thinking: stifling, suppressive policy-making).
- The greatest, or exclusive, priority was given to obvious academic intelligence (reading, writing, arithmetic, etc.), when other multiple intelligences (especially interpersonal and intrapersonal ability, helped by emotional intelligence) arguably have far greater value to work and society (and certainly cause far more problems in work and society when left undeveloped).
- Testing and assessment of learners and teachers measured the wrong things, too narrowly, in the wrong way - like measuring the weather with a thermometer.
- Testing (of the wrong kind, although nothing is really suitable for this purpose) was used to judge and effectively declare people's fundamental worth, which obviously has a direct effect on self-esteem, confidence, ambitions, dreams, life purpose and so on (nothing too serious, then).
- Broader individual development needs, life skills especially, were ignored (many organisations and education policy-makers seem to think that people are robots, that their work and private lives are not connected, and that work is unaffected by whether people feel good or depressed, etc.).
- Individual learning styles were ignored (learning was delivered mainly through reading and writing, when many people learn far better through experience, observation and so on; see Kolb and VAK).
- Testing focused on evidence of knowledge, which unfairly suits only certain types of people, rather than assessing people's application, interpretation and development of ability, which is what real life requires (see Kirkpatrick's model, and consider the importance of assessing what people do with their improved capability, beyond merely assessing whether they have retained theory, which means relatively little).
- Children's education traditionally ignored the fact that developing happy, confident, productive people is far easier if, above all, you help people to discover what they enjoy and are good at, whatever it is, and then build on that.

Teaching, training and learning must be aligned with individual potential, individual learning styles, and broader life development needs. A broadly flexible, individual approach to developing people is just as vital in the workplace as it is in schools.

Returning to workplace training itself, and the work of Leslie Rae: the evaluation of workplace learning and training. There have been many surveys of the use of evaluation in training and development (see the example research findings below).
While the surveys may at first appear encouraging, suggesting that many organisations and trainers use training evaluation extensively, when more specific and detailed questions are asked it frequently turns out that many professional trainers and training departments use only 'reactionnaires' (general, vague feedback forms), including the dreaded 'happy sheet', relying on questions such as 'How well did you feel the trainer did?' and 'How enjoyable was the training course?'. As Kirkpatrick, among others, teaches us, even well-produced reactionnaires do not constitute proper validation or evaluation of training.

For effective training and learning evaluation, the principal questions should be:

- To what extent were the identified training needs and objectives achieved by the programme?
- To what extent were the learners' objectives achieved?
- What specifically did the learners learn, or what were they usefully reminded of?
- What commitments have the learners made about applying their learning on their return to work?

And back at work:

- How successful were the trainees in implementing their action plans?
- To what extent were they supported in this by their line managers?
- To what extent were the actions listed above achieved?
- What was the return on investment (ROI) for the organisation, either in terms of meeting the identified objectives or, where possible, in monetary terms?

Organisations commonly fail to perform these evaluation processes, especially where the HR department and trainers do not have enough time to do so, and/or where the HR department does not have sufficient resources - the people and money - to do so. Naturally the evaluation cloth must be cut according to the resources available (and the cultural atmosphere), and these tend to vary substantially from one organisation to another. The fact remains that good, methodical evaluation produces a solid bank of reliable data; where little evaluation is done, little is known about the effectiveness of the training.

Evaluating training

There are two principal factors that need to be resolved: Who has responsibility for the validation and evaluation processes? What resources of time, people and money are available for validation and evaluation purposes? (In this regard, consider the effect of variations, for example unexpected budget or staffing cuts; in other words, anticipate and plan contingencies to deal with variation.)

Responsibility for training evaluation

Traditionally, in the main, any evaluation or other assessment has been left to the trainers, 'because that is their job'. My (Rae's) contention is that an 'evaluation quintet' should exist, each member of the quintet having roles and responsibilities in the process (see 'Assessing the Value of Your Training', Leslie Rae, Gower, 2002). A lot of lip-service appears to be paid to this, but actual practice tends to fall far short. The advocated training evaluation quintet consists of senior management, the trainer, line management, the training manager, and the trainee. Each has its own responsibilities, which are detailed next.

Senior management: training evaluation responsibilities

- Awareness of the need for, and value of, training to the organisation.
- The necessity of involving the Training Manager (or equivalent) in senior management meetings where decisions are made about future changes in which training will be essential.
- Knowledge of, and support for, training plans.
- Active participation in events.
- A requirement that evaluation is carried out, and a requirement for regular summary reports.
- Policy and strategic decisions based on the results and ROI data.

The trainer: training evaluation responsibilities

- Provision of any necessary pre-programme work and programme planning.
- Identification, at the start of the programme, of the trainees' existing levels of knowledge and skills.
- Provision of training and learning resources to enable the learners to learn within the objectives of the programme and their own objectives.
- Monitoring of the learning as the programme progresses.
- At the end of the programme, assessment of, and receipt of reports from, the learners on the levels of learning achieved.
- Ensuring the production by the learners of an action plan to reinforce, practise and implement their learning.

Line manager: training evaluation responsibilities

- Identification of work needs and of the people to be trained.
- Involvement in the development of the training programme and its evaluation.
- Support for pre-event preparation, and holding briefing meetings with the learner.
- Giving ongoing, practical support to the training programme.
- Holding a debriefing meeting with the learner on their return to work, to discuss, agree or help to modify, and approve the actions on their action plan.
- Reviewing the progress of the learning implementation.
- Final review of the implementation's success, and assessment, where possible, of the ROI.

Training manager: training evaluation responsibilities

- Management of the training department, and agreement of the training needs and the programme applications.
- Maintenance of interest and support in the planning and implementation of the programmes, including practical involvement where required.
- Introduction and maintenance of evaluation systems, and production of regular reports for senior management.
- Frequent, relevant contact with senior management.
- Liaison with the learners' line managers, and arrangement of programmes for the managers covering their learning-implementation responsibilities.
- Liaison with the line managers, where necessary, in the assessment of the training's ROI.

The trainee or learner: training evaluation responsibilities

- Involvement in the planning and design of the training programme, where possible.
- Involvement in the planning and design of the evaluation process, where possible.
- Obviously, to take interest and an active part in the training programme or activity.
- To complete a personal action plan during and at the end of the training, for implementation on return to work, and to put it into practice with the support of the line manager.
- To take an interest in, and support, the evaluation process.

N.B. Although the trainees' prime role in the programme is to learn, the learners must be involved in the evaluation process. This is essential, because without their comments much of the evaluation could not occur, and the new knowledge and skills would not be applied. For trainees to neglect either responsibility is, in effect, for the business to throw away its investment in the training. Trainees will assist more readily if the process avoids the look and feel of a paper-chase or number-crunching exercise. Instead, make sure trainees understand the importance of their input: precisely what they are being asked to do, and why.

Training evaluation and validation options

As suggested earlier, what you are able to do, rather than what you would like to do or what should be done, will depend on the various resources and the cultural support available. The following is an overview of the spectrum of possibilities within these constraints.
1 - Do nothing. Doing nothing to measure the effectiveness and results of any business activity is never a good option, but it may be justifiable in the training area in the following situations:

- if the organisation, even when prompted, demonstrates no interest in the evaluation and validation of training and learning, from the line managers up to the board
- if you, as the trainer, have a robust process for planning training to meet organisational and people-development needs
- if you have a reasonable level of assurance or evidence that the training delivered is fit for purpose and gets results, and that the organisation (notably the line managers and the board, the sources of potential criticism and complaint) is happy with the training provision
- if you have far better things to do than carry out training evaluation, especially where evaluation is difficult and cooperation is scarce.

Even in these circumstances, however, there may come a time when a basic evaluation system proves helpful, for example:

- you receive a sudden, unexpected demand to justify some part or all of the training activity (such demands can arise with a change of management, or of policy, or with a new initiative)
- you see an opportunity or a need to produce your own justification (for example, to obtain increased training resources, staffing or budget, or new premises or equipment)
- you are seeking to change jobs and need evidence of the effectiveness of your past training activities.

Doing nothing is always the least desirable option. At any time somebody more senior than you may be moved to ask, 'Can you prove what you say about how successful you are?' Without evaluation records, you are likely to be lost for words of proof.

2 - Minimal action. The absolutely basic action for starting some form of evaluation is as follows. At the end of every training programme, give the learners time and support, along with information about the programme, and ask them to complete an action plan based on what they have learned on the programme and what they intend to apply on their return to work. This action plan should include not only a description of the intended action but also comments on how they intend to implement it, a timescale for starting and completing it, the resources required, and so on. A fully detailed action plan helps the learners to consolidate their thoughts. The action plan has a secondary use in demonstrating to the trainers, and anyone else interested, the types and levels of learning that have been achieved. Learners should also be encouraged to show and discuss their action plans with their line managers on returning to work, whether or not this kind of follow-up has been initiated by the managers.

3 - Minimal desirable action leading to evaluation. On returning to work to implement their action plans, learners should ideally be supported by their line managers, rather than the responsibility being left to rest entirely with the learner. The line manager should hold a debriefing meeting with the learner soon after their return to work, covering a number of questions and essentially discussing and agreeing the action plan, and arranging support for the learner in its implementation.
As described earlier, this is a clear responsibility of the line manager, and it demonstrates to senior management, the training department and, certainly not least, the learner that a positive attitude is being taken towards the training. Contrast this with the all-too-common practice in which a member of staff is sent on a training course, after which all thought of management follow-up is forgotten. The initial line-manager debriefing meeting is not the end of the learning relationship between the learner and the line manager. At the initial meeting, objectives and support should be agreed, and arrangements made for interim reviews of the implementation's progress. After these, as appropriate, a final review meeting needs to consider future action. This process requires minimal effort from the line manager: it is little more than the kind of observation a line manager would normally make in monitoring the actions of his or her staff. These review meetings require little of the manager's time and effort, yet they demonstrate powerfully to the staff that their manager takes training seriously.

4 - A basic approach to validating training programmes. The action-plan-and-implementation approach described in (3) above places the responsibility on the learners and their line managers and, apart from the provision of advice and time, requires no resource involvement from the trainer. There are two further parts of an approach that similarly require only the learners' time to describe their feelings and information. The first is a reactionnaire, which seeks the views, opinions and feelings of the learners about the programme. This is not at the level of the 'happy sheet' or a simple tick-list, but one that allows realistic feelings to be expressed. A reactionnaire of this kind is described in the book ('Assessing the Value of Your Training', Leslie Rae, Gower, 2002). It seeks a score for each question against a six-point good-to-bad range, and also the learner's reasons for the score, which is particularly important if the score is low. Reactionnaires should not be automatic events on every course or programme. This kind of evaluation can be reserved for new programmes (for example, the first three occasions) or for when there are indications that something is wrong with the programme. An example reactionnaire is available in the series of free training evaluation tools.

The next evaluation instrument, like the action plan, should be used at the end of every course wherever possible. This is the Learning Questionnaire (LQ), which can be a relatively simple instrument asking the learners what they have learned on the programme, what they have been usefully reminded of, and what was not included that they had expected, or would have liked, to be included. A scoring range can be included, but this is minimal and secondary to the textual comments made by the learners. There is an alternative to the LQ called the Key Objectives LQ (KOLQ), which seeks the amount of learning achieved by posing questions against the list of key objectives produced for the programme. When a reactionnaire and an LQ/KOLQ are used, they must not be filed and forgotten at the end of the programme, as is the common tendency, but used to produce a training evaluation and validation summary. A fact-based evaluation summary is necessary to support claims that a programme is good and achieves its set objectives.
Evaluation summaries can also help with publicity for the training programme, and so on. An example Learning Questionnaire and an example Key Objectives Learning Questionnaire are included in the free evaluation toolkit.

5 - The total evaluation process. Where required, the processes described in (3) and (4) can be combined and supplemented by other methods to produce a full evaluation process that covers all eventualities. Few occasions or environments allow this full process to be applied, particularly where there is no quintet support, but it is the ultimate aim. The process is summarised below:

- identification of training needs and setting of objectives by the organisation
- planning, design and preparation of the training programme against the objectives
- pre-course identification of the people with needs, and completion of the preparation required by the training programme
- provision of the agreed training programme
- a pre-course briefing meeting between the learner and the line manager
- pre-course or programme-start identification of the learners' existing knowledge, skills and attitudes (see the 3-Test before-and-after training tool, with its manual version (pdf), manual version (xls) and working-file version; thanks are due to F Tarek for sharing the pdf file and the Arabic translations of the same tool as doc files)
- interim validation as the programme proceeds
- assessment of terminal knowledge, skills, etc., and completion of perception assessments (the 3-Test tool and its manual and working-file versions)
- completion of an end-of-programme reactionnaire
- completion of an end-of-programme Learning Questionnaire or Key Objectives Learning Questionnaire
- completion of an action plan
- a post-course debriefing meeting between the learner and the line manager
- line-manager observation of the implementation's progress
- review meetings to discuss the progress of the implementation
- a final implementation review meeting
- assessment of the ROI.

Whatever you do, do something. The processes described above allow considerable latitude according to the resources and the cultural environment, so there is always an opportunity to do something; naturally, the more of the tools used and the fuller the approach, the more valuable and effective the evaluation. But be pragmatic. Large, expensive, critical programmes will always justify more evaluation and scrutiny than small, one-off, less important training activities. Where there is heavy investment and expectation, the evaluation should be suitably detailed and complete. Training managers particularly should clarify measurement and evaluation expectations with senior management before embarking on any substantial new training activity, so that an appropriate evaluation process can be established when the programme is designed. When a large and potentially critical programme is planned, the training manager should err on the side of caution and ensure that adequate evaluation processes are in place. As with any investment, a senior executive will sooner or later ask, 'What did we get for our investment?', and when he or she asks, the training manager must be able to provide a detailed response.

Measuring improvement using self-assessment

The 3-Test before-and-after tool example (see the manual version (pdf), the manual version (xls) and the working-file version) is a useful instrument, and a very helpful illustration of the challenges involved in measuring an improvement in ability through self-assessment after training.
A critical element in this tool is the assessment called the 'revised pre-trained ability', which is completed after the training. The revised pre-trained ability is a reassessment, made after the training, of the level of ability that existed before the training. It usually differs significantly from the ability assessment made before the training because, implicitly, we do not fully understand competence and ability in a skill before we are trained in it. People typically over-estimate their ability before training. After training, many people realise that they actually had a lower competence than they previously believed (that is, before receiving the training). It is important to allow for this when attempting to measure real improvement using self-assessment, and this is the reason for the revised (post-training) assessment of pre-training ability. Moreover, in many situations after training, people's notions of what competence in a given skill actually entails can develop considerably: they realise how large and complex the subject is, and they become more aware of their real ability and of the opportunities for improvement.

Because of this, it is possible for a person before training to imagine (in ignorance) that they have a competence level of, say, 7 out of 10. After training their ability typically improves, but so does their awareness of the true nature of competence, and so they may then judge themselves, after training, to be only, say, 8 or 7 or even lower at 6 out of 10. This looks like a regression. It is not, of course, which is why a reassessment of the pre-trained ability is important. Extending the example, a person's revised assessment of their pre-trained ability could be, say, 3 or 4 out of 10 (revised downwards from 7/10), because now the person can make an informed (revised) assessment of their actual competence before the training.

A useful reference model for understanding this is the Conscious Competence learning model. Before we are trained we tend to be unconsciously incompetent (unaware of our true ability and of what competence actually is). After training we become more consciously aware of our true level of competence, as well as hopefully becoming more competent too. When we use self-assessment tools it is important to allow for this, hence the design of the 3-Test before-and-after training tool (see also the manual versions in pdf and xls formats). In other words: when measuring improvement between before and after training using self-assessment, it is useful first to revise our pre-trained assessment, because before training our assessment of our ability is usually over-optimistic, which can suggest (falsely) an apparently small improvement or even a regression (because we thought we were more skilled than we now realise we actually were). Note that this self-assessment aspect of learning evaluation is only one part of the overall evaluation that can be addressed. See Kirkpatrick's learning evaluation model for a wider appreciation of the issues.
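To see the arithmetic behind this, here is a minimal sketch in Python (the scores are invented, matching the 7-out-of-10 example above; the function and variable names are illustrative only, not part of Rae's 3-Test materials):

    # Illustrative self-assessment scores on a 0-10 scale.
    def improvement(pre: float, post: float) -> float:
        """Improvement measured as the difference between two self-assessments."""
        return post - pre

    naive_pre = 7.0    # self-assessment made before training (over-optimistic)
    post = 6.0         # self-assessment made after training
    revised_pre = 3.5  # post-training reassessment of pre-training ability

    # Comparing the naive pre-training score with the post-training score
    # falsely suggests a regression:
    print(improvement(naive_pre, post))    # -1.0 (apparent regression)

    # Comparing the revised pre-trained ability with the post-training score
    # reveals the real gain:
    print(improvement(revised_pre, post))  # 2.5 (actual improvement)

The point of the sketch is simply that the same post-training score yields opposite conclusions depending on which baseline is used, which is why the 3-Test tool asks for the revised pre-trained assessment.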
The trainer's overall responsibilities, aside from training evaluation

Over the years the trainer's roles have changed, but the basic purpose of the trainer is to provide efficient and effective training programmes. The following suggests the elements of the basic role of the trainer, bearing in mind that different circumstances will require modifications of these activities.

1. The basic role of a trainer (however they may be designated) is to offer and provide efficient and effective training programmes aimed at enabling the participants to learn the knowledge, skills and attitudes required of them.

2. A trainer plans and designs the training programmes, or otherwise obtains them (for example, distance learning or e-technology programmes on the Internet or on CD/DVD), in accordance with the requirements identified from the results of a TNIA (Training Needs Identification and Analysis - or simply TNA, Training Needs Analysis) for the relevant staff of an organization or organizations.

3. The training programmes cited at (1) and (2) must be completely based on the TNIA, which has been: (a) completed by the trainer on behalf of, and at the request of, the relevant organization, or (b) determined in some other way by the organization.

4. Following discussion with, or direction by, the organization management, who will have taken into account costs and values (e.g. ROI - return on investment in the training), the trainer will agree with the organization management the most appropriate form and methods for the training.

5. If the appropriate form for satisfying the training need is a direct training course or workshop, or an Intranet-provided programme, the trainer will design this programme using the most effective approaches, techniques and methods, integrating face-to-face practices with various forms of e-technology wherever this is possible or desirable.

6. If the appropriate form for satisfying the training need is some form of open learning programme or e-technology programme, the trainer, with the support of the organization management, will obtain the materials, plan their utilization, and be prepared to support the learner in the use of the relevant materials.

7. The trainer, following contact with the potential learners, preferably through their line managers, to arrange some pre-programme activity and/or initial evaluation activities, should provide the appropriate training programme(s) to the learners put forward by their organization(s). During and at the end of the programme, the trainer should ensure that: (a) an effective form of training/learning validation is followed, and (b) the learners complete an action plan for the implementation of their learning on their return to work.

8. Provide, as necessary, having reviewed the validation results, an analysis of the changes in the knowledge, skills and attitudes of the learners to the organization management, with any recommendations deemed necessary. The review would include consideration of the effectiveness of the content of the programme and the effectiveness of the methods used to enable learning, that is, whether the programme satisfied the objectives of the programme and those of the learners.

9. Continue to provide effective learning opportunities as required by the organization.

10. Enable their own CPD (Continuing Professional Development) by all possible developmental means - training programmes and self-development methods.

11. Arrange and run educative workshops for line managers on the subject of the fulfilment of their training and evaluation responsibilities.

Depending on the circumstances and the decisions of the organization management, trainers do not, under normal circumstances:
1. Make organizational training decisions without the full agreement of the organizational management.

2. Take part in the post-programme learning implementation or evaluation, unless the learners' line managers cannot or will not fulfil their training and evaluation responsibilities.

Unless circumstances force them to behave otherwise, the trainer's role is to provide effective training programmes, and the role of the learners' line managers is to continue the evaluation process after the training programme: to counsel and support the learner in the implementation of their learning, and to assess the cost-value effectiveness or (where feasible) the ROI of the training. Naturally, if action will help the trainers to become more effective in their training, they can take part in, but not run, any pre- and post-programme actions as described, always remembering that these are the responsibilities of the line manager.

Leslie Rae's further references and recommended reading

Annett, Duncan, Stammers and Gray, Task Analysis, Training Information Paper 6, HMSO, 1971.
Bartram, S. and Gibson, B. Training Needs Analysis, 2nd edition, Gower, 1997.
Bartram, S. and Gibson, B. Evaluating Training, Gower, 1999.
Bee, Frances and Roland, Training Needs Analysis and Evaluation, Institute of Personnel and Development, 1994.
Boydell, T. H. A Guide to the Identification of Training Needs, BACIE, 1976.
Boydell, T. H. A Guide to Job Analysis, BACIE, 1970. (A companion booklet to A Guide to the Identification of Training Needs.)
Bramley, Peter, Evaluating Training Effectiveness, McGraw-Hill, 1990.
Buckley, Roger and Caple, Jim, The Theory and Practice of Training, Kogan Page, 1990. (Chapters 8 and 9.)
Craig, Malcolm, Analysing Learning Needs, Gower, 1994.
Davies, I. K. The Management of Learning, McGraw-Hill, 1971. (Chapters 14 and 15.)
Easterby-Smith, M., Braiden, E. M. and Ashton, D. Auditing Management Development, Gower, 1980.
Easterby-Smith, M. How to Use Repertory Grids in HRD, Journal of European Industrial Training, Vol 4, No 2, 1980.
Easterby-Smith, M. Evaluating Management Development, Training and Education, 2nd edition, Gower, 1994.
Fletcher, Shirley, NVQs, Standards and Competence, 2nd edition, Kogan Page, 1994.
Hamblin, A. C. The Evaluation and Control of Training, McGraw-Hill, 1974.
Honey, P. The Repertory Grid in Action, Industrial and Commercial Training, Vol II, Nos 9, 10 and 11, 1979.
ITOL, A Glossary of UK Training and Occupational Learning Terms, ed. J. Brooks, ITOL, 2000.
Kelly, G. A. The Psychology of Personal Constructs, Norton, 1953.
Kirkpatrick, D. L. Evaluation of Training, in Training and Development Handbook, edited by R. L. Craig, McGraw-Hill, 1976.
Kirkpatrick, D. L. Evaluating Training Programs: The Four Levels, Berrett-Koehler, 1996.
Laird, D. Approaches to Training and Development, Addison-Wesley, 1978. (Chapters 15 and 16.)
Mager, R. F. Preparing Objectives for Programmed Instruction, Fearon, 1962. (Later re-titled Preparing Instructional Objectives, Fearon, 1975.)
Manpower Services Commission, A Glossary of Training Terms, HMSO, 1981.
Newby, Tony, Validating Your Training, Kogan Page Practical Trainer Series, 1992.
Odiorne, G. S. Training by Objectives, Macmillan, 1970.
Parker, T. C. Statistical Methods for Measuring Training Results, in Training and Development Handbook, edited by R. L. Craig, McGraw-Hill, 1976.
Peterson, Robyn, Training Needs Analysis in the Workplace, Kogan Page Practical Trainer Series, 1992.
Phillips, J.
Handbook of Training Evaluation and Measurement Methods, 3rd edition, Butterworth-Heinemann, 1997.
Phillips, J. Return on Investment in Training and Performance Improvement Programs, Butterworth-Heinemann, 1997.
Phillips, P. P. Understanding the Basics of Return on Investment in Training, Kogan Page, 2002.
Prior, John (ed.), Handbook of Training and Development, 2nd edition, Gower, 1994.
Rackham, N. and Morgan, T. Behaviour Analysis in Training, McGraw-Hill, 1977.
Rackham, N. et al. Developing Interactive Skills, Wellens, 1971.
Rae, L. Towards a More Valid End-of-Course Validation, The Training Officer, October 1983.
Rae, L. The Skills of Human Relations Training, Gower, 1985.
Rae, L. How Valid is Validation?, Industrial and Commercial Training, Jan.-Feb. 1985.
Rae, L. Using Evaluation in Training and Development, Kogan Page, 1999.
Rae, L. Effective Planning in Training and Development, Kogan Page, 2000.
Rae, L. Training Evaluation Toolkit, Echelon Learning, 2001.
Rae, L. Trainer Assessment, Gower, 2002.
Rae, L. Techniques of Training, 3rd edition, Gower, 1995. (Chapter 10.)
Robinson, K. R. A Handbook of Training Management, Kogan Page, 1981. (Chapter 7.)
Schmalenbach, Martin, The Death of ROI and the Rise of a New Management Paradigm, Journal of the Institute of Training and Occupational Learning, Vol. 3, No. 1, 2002.
Sheal, P. R. How to Develop and Present Staff Training Courses, Kogan Page, 1989.
Smith, M. and Ashton, D. Using Repertory Grid Techniques to Evaluate Management Training, Personnel Review, Vol 4, No 4, 1975.
Stewart, V. and Stewart, A. Managing the Manager's Growth, Gower, 1978. (Chapter 13.)
Thurley, K. E. and Wirdenius, H. Supervision: a Re-appraisal, Heinemann, 1973.
Warr, P. B., Bird, M. and Rackham, N. The Evaluation of Management Training, Gower, 1970.
Whitelaw, M. The Evaluation of Management Training: a Review, Institute of Personnel Management, 1972.
Wills, Mike, Managing the Training Process, McGraw-Hill, 1993.

The core content and tools relating to workplace training evaluation are based on the work of Leslie Rae, MPhil, Chartered FCIPD, FITOL, which is gratefully acknowledged. Leslie Rae welcomes comments and enquiries about the subject of training and its evaluation, and can be contacted via businessballs or direct: Wrae804418 at aol dot com.

a note about ROI (return on investment) in training

Attempting a financial ROI assessment of training is a controversial issue. It is a difficult task to do in absolute terms, owing to the many aspects to be taken into account, some of which are very difficult to quantify at all, let alone to define in precise financial terms. The investment (the cost) in training may be easy enough to identify, but the benefits (the return) are notoriously tricky to pin down. What value do you place on improved morale? Reduced stress levels? Longer careers? Better-qualified staff? Improved time management? All of these can be benefits - returns - on training investment. Attaching a value, and relating that value to a single cause, i.e. the training, is often impossible. At best, therefore, many training ROI assessments are necessarily best estimates. If ROI-type measures are required in areas where reliable financial assessment is not possible, it is advisable to agree a best-possible approach, or a notional indicator, and then to ensure this is used consistently from occasion to occasion: year on year, course to course, allowing at least a comparison of like with like to be made, and trends to be spotted, even if the financial data is not absolutely accurate.
In the absence of absolutely quantifiable data, find something that will provide a useful, if notional, indication. For example, after training salespeople, the increased number and value of new sales made is an indicator of sorts. After motivational or team-building training, reduced absentee rates would be an expected output. After an extensive management development programme, the increase in internal management promotions would be a measurable return. Find something to measure, rather than saying it cannot be done at all, but be pragmatic, and limit the time and resources spent according to the accuracy and reliability of the input and output data. Also, refer back to the very original Training Needs Analysis that prompted the training itself: what were the business performance factors that the training sought to improve? Use these original drivers to measure and relate to the organizational return achieved.

The problems in assessing ROI are more challenging in public and non-profit-making organizations - government departments, charities, voluntary bodies, etc. ROI assessment in these environments can be so difficult as to be insurmountable, so that the organization remains satisfied with general approximations or vague comparisons, or accepts wider forms of justification for the training without invoking detailed costing. None of this is to say that cost- and value-effectiveness assessment should not be attempted. At the very least, direct costs must be controlled within agreed budgets, and where possible, attempts at more detailed assessment of returns should be made. It may be of some consolation to know that Jack Phillips, an American ROI guru, recently commented about training ROI: 'Organisations should be considering implementing ROI impact studies very selectively on only 5 to 10 per cent of their training programme, otherwise it becomes incredibly expensive and resource intensive.'

training evaluation research

This research extract is an example of the many survey findings that indicate the need to improve the evaluation of training and learning. It is useful to refer to the Kirkpatrick learning evaluation model to appreciate the different stages at which learning and training effectiveness should be evaluated. Research published by the UK's British Learning Association in May 2006 found that 72% of a representative sample of the UK's leading learning professionals considered that learning 'tends not to lead to change'. Only 51% of respondents said that learning and training were evaluated several months after the learning or training intervention. The survey was carried out among delegates of the 2006 conference of the UK's British Learning Association. Speaking on the findings, David Wolfson, Chairman of the British Learning Association, said: 'These are worrying figures from the country's leading learning professionals. If they really do reflect training in the UK, then we have to think long and hard about how to make the changes that training is meant to give. It suggests that we have to do more - much more - to ensure that learning interventions really make a difference.' The British Learning Association is a centre of expertise that produces best-practice examples, identifies trends, and disseminates information on both innovative and well-established techniques and technologies for learning. The aim is to synthesise existing knowledge, develop original solutions and disseminate this to a wide cross-sector membership.

There are many different ways to assess and evaluate training and learning.
Remember that evaluation is for the learner too: evaluation is not just for the trainer or the organisation. Feedback and test results help the learner to know where they stand, and they directly affect the learner's confidence and their determination to continue with the development - in some cases with their own future personal development altogether. Central to improving training and learning is the question of bringing more meaning and purpose to people's lives, aside from merely focusing on skills and work-related development and training courses. Learning and training enable positive change and improvement, for people and employers, when people's work is aligned with people's lives: their strengths, personal potential, goals and dreams, outside work as well as at work. Evaluation of training can only be effective if the training itself is effective and appropriate. Testing the wrong things in the wrong way will give you unhelpful data, and could be even more unhelpful for the learners. Consider people's learning styles when evaluating personal development. Learning styles are essentially a perspective on people's preferred working, thinking and communicating styles. Written tests do not enable all types of people to demonstrate their competence. Evaluating retention of knowledge alone is a very limited form of assessment; it will not indicate how well people apply their learning and development in practice. Revisit Kirkpatrick's theory, and focus as much as you can on how the learning and development are applied, and on the change and improvements achieved, in the working situation. See the notes about organizational change and ethical leadership to help understand and explain these principles further, and to make learning and development more meaningful and appealing for people.

authorship/referencing: © Leslie Rae (main workplace learning evaluation content and tools) 2004-13; © Alan Chapman (edit and contextual materials) 2004-2013.

Evaluating Training and Results (ROI of Training)

Also See the Library's Blogs Related to Evaluating Training and Results (ROI). In addition to the articles on this current page, also see the following blogs, which have posts related to Evaluating Training and Results (ROI). Scan down each blog's page to see the various posts; also see the 'Recent Blog Posts' section in the blog's sidebar, or click on 'Next' near the bottom of a post in the blog. The blogs also link to many free related resources.

Preparation for Evaluating Training Activities and Results

The last phase of the ADDIE model of instructional design, or systematic training, is evaluation. However, the evaluation really should have started even during the previous phase -- the implementation phase -- because the evaluation covers both the activities of the trainer as they are being implemented and the results of the training as it nears an end or is finished. Evaluation includes getting ongoing feedback, e.g. from the learner, the trainer and the learner's supervisor, to improve the quality of the training and to identify whether the learner achieved the goals of the training. Before proceeding through the guidelines in this topic, the reader will benefit from first reviewing the information about formal, systematic training, especially the ADDIE model, in Formal Training Processes -- Instructional Systems Design (ISD) and ADDIE. Then scan the contents of the fourth phase of the ADDIE model of systematic planning of training, Implementing Your Training Plan. (This evaluation phase is the fifth phase of the ADDIE model.)
Also, note that there is a document, Complete Guidelines to Design Your Training Plan, which condenses the guidelines from the various topics on training plans to guide you in developing a training plan. That document also provides a Framework to Design Your Training Plan that you can use to document the various aspects of your plan.

Perspective on Evaluating Training

Evaluation is often looked at from four different levels (the "Kirkpatrick levels"), listed below. Note that the farther down the list, the more valid the evaluation.

1. Reaction -- What does the learner feel about the training?
2. Learning -- What facts, knowledge, etc. did the learner gain?
3. Behaviors -- What skills did the learner develop; that is, what new information is the learner using on the job?
4. Results or effectiveness -- What results occurred; that is, did the learner apply the new skills to the necessary tasks in the organization and, if so, what results were achieved?

Although level 4, evaluating results and effectiveness, is the most desired result from training, it is usually the most difficult to accomplish. Evaluating effectiveness often involves the use of key performance measures -- measures you can see, e.g. faster and more reliable output from the machine after the operator has been trained, higher ratings on employees' job-satisfaction questionnaires from the trained supervisor, etc. This is where following sound principles of performance management is of great benefit.

Suggestions for Evaluating Training

Typically, evaluators look for validity, accuracy and reliability in their evaluations. However, these goals may require more time, people and money than the organization has. Evaluators are also looking for evaluation approaches that are practical and relevant. Training and development activities can be evaluated before, during and after the activities. Consider the following very basic suggestions.

Before the Implementation Phase. Will the selected training and development methods really result in the employee learning the knowledge and skills needed to perform the task or carry out the role? Have other employees used the methods and been successful? Consider applying the methods to a highly skilled employee, and ask that employee for their impressions of the methods. Do the methods conform to the employee's preferences and learning styles? Have the employee briefly review the methods, e.g. documentation, overheads, etc. Does the employee experience any difficulties understanding the methods?

During Implementation of Training. Ask the employee how they're doing. Do they understand what's being said? Periodically conduct a short test, e.g. have the employee explain the main points of what was just described to them, e.g. in the lecture. Is the employee enthusiastically taking part in the activities? Is he or she coming in late and leaving early? It's surprising how often learners will leave a course or workshop and immediately complain that it was a complete waste of their time. Ask the employee to rate the activities from 1 to 5, with 5 being the highest rating. If the employee gives a rating of anything less than 5, have the employee describe what could be done to get a 5.
After Completion of the Training. Give him or her a test before and after the training and development, and compare the results. Interview him or her before and after, and compare the results. Watch him or her perform the task or conduct the role. Assign an expert evaluator from inside or outside the organization to evaluate the learner's knowledge and skills.

One Approach to Calculate Return On Investment (ROI)

(This section was written by Leigh Dudley. The section mentions HRD -- activities of human resource development -- but the guidelines are as applicable to training and development.)

The calculation of ROI in training and development or HRD begins with the basic model, where sequential steps simplify a potentially complicated process. The ROI process model provides a systematic approach to ROI calculations. The step-by-step approach keeps the process manageable, so that users can tackle one issue at a time. The model also emphasizes that this is a logical process that flows from one step to another; applying the model consistently from one ROI calculation to another provides consistency, understanding, and credibility. Each step of the model is briefly described below.

Collecting Post-Program Data

Data collection is central to the ROI process and is its starting point. Although the ROI analysis is (or should be) planned early in the training and development cycle, the actual ROI calculation begins with data collection. (Additional information on planning for the ROI analysis is presented later under "Essential Planning Steps".) The HRD staff should collect both hard data (representing output, quality, cost, and time) and soft data (including work habits, work climate, and attitudes). Collect Level 4 data using a variety of methods, as follows.

Follow-up Questionnaires -- Administer follow-up questionnaires to uncover specific applications of training. Participants provide responses to a variety of open-ended and forced-response questions. Use questionnaires to capture both Level 3 and Level 4 data. The example below shows a series of Level 4 impact questions contained in a follow-up questionnaire for evaluating an automotive manufacturer's sales training program in Europe, with appropriate responses. HRD practitioners can use the data in an ROI analysis.

Program Assignments -- Program assignments are useful for simple, short-term projects. Participants complete the assignment on the job, using the skills or knowledge learned in the program. Completed assignments are reported as evaluation information, which often contains Level 3 and Level 4 data. Convert the Level 4 data to monetary values and compare the data to costs to develop the ROI.

Action Plans -- Developed in training and development programs, action plans should be implemented on the job after the program is completed. A follow-up of the plans provides evaluation information. Level 3 and Level 4 data are collected with action plans, and the HRD staff can develop the ROI from the Level 4 data.

Performance Contracts -- Developed prior to conducting the program, when the participant, the participant's supervisor, and the instructor all agree on planned, specific outcomes from the training, performance contracts outline how the program will be implemented. Performance contracts usually collect both Level 3 and Level 4 data, and are designed and analyzed in the same way as action plans.
Performance Monitoring - As the most beneficial method of collecting Level 4 data, performance monitoring is useful when HRD personnel examine various business performance records and operational data for improvement. The important challenge in this step is to select the data collection method or methods that are appropriate for the setting, the specific program, and the time and budget constraints.

Isolating the Effects of Training
Isolating the effects of training is an often overlooked issue in evaluations. In this step of the ROI process, explore specific techniques to determine the amount of output performance directly related to the program. This step is essential because many factors influence performance data after training. The specific techniques of this step will pinpoint the amount of improvement directly related to the program, increasing the accuracy and credibility of the ROI calculation. Collectively, the following techniques provide a comprehensive set of tools to tackle the important and critical issue of isolating the effects of training.
Control Group - Use a control group arrangement to isolate training impact. With this technique, one group receives training while another, similar group does not. The difference in the performance of the two groups is attributed to the training program. When properly set up and implemented, a control group arrangement is the most effective way to isolate the effects of training.
Impact Estimates - When the previous approach is not feasible, estimating the impact of training on the output variables is another approach, and can be accomplished at the following four levels.
Participants estimate the amount of improvement related to training. In this approach, provide participants with the total amount of improvement, on a pre- and post-program basis, and ask them to indicate the percent of the improvement that is actually related to the training program.
Supervisors of participants estimate the impact of training on the output variables. Present supervisors with the total amount of improvement, and ask them to indicate the percent related to training.
Senior Managers estimate the impact of training by providing an estimate or adjustment to reflect the portion of the improvement related to the training program. While perhaps inaccurate, having senior management involved in this process develops ownership of the value and builds buy-in.
Experts estimate the impact of training on the performance variable. Because these estimates are based on previous experience, experts must be familiar with the type of training and the specific situation.
Customers sometimes provide input on the extent to which training has influenced their decision to use a product or service. Although this approach has limited applications, it can be quite useful in customer service and sales training.

Converting Data to Monetary Values
A number of techniques are available to convert data to monetary values; the selection depends on the type of data and the situation.
Convert output data to profit contribution or cost savings. With this technique, output increases are converted to monetary value based on their unit contribution to profit or the unit of cost reduction. These values are readily available in most organizations and are seen as generally accepted standard values.
Calculate the cost of quality, and convert quality improvements directly to cost savings.
This standard value is available in many organizations for the most common quality measures (such as rejects, rework, and scrap).
Use the participants' wages and employee benefits as the value for time, in programs where employee time is saved. Because a variety of programs focus on improving the time required to complete projects, processes, or daily activities, the value of time becomes an important and necessary issue. The use of total compensation per hour provides a conservative estimate for the value of time.
Use historical costs when they are available for a specific variable. In this case, use organizational cost data to establish the specific value of an improvement.
Use internal and external experts, when available, to estimate a value for an improvement. In this situation, the credibility of the estimate hinges on the expertise and reputation of the individual.
Use external databases, when available, to estimate the value or cost of data items. Research, government, and industry databases can provide important input for these values. The difficulty lies in finding a specific database related to the situation.
Ask participants to estimate the value of the data item. For this approach to be effective, participants must understand the process and be capable of providing a value for the improvement.
Ask supervisors and managers to provide estimates when they are willing and capable of assigning values to the improvement. This approach is especially useful when participants are not fully capable of providing this input, or in situations where supervisors or managers need to confirm or adjust the participant's estimate.
Converting data to monetary value is very important in the ROI model and is absolutely necessary to determine the monetary benefits from a training program. The process is challenging, particularly with the conversion of soft data, but it can be methodically accomplished using one or more of the above techniques.

Tabulating Program Costs
The other part of the equation in a cost-benefit analysis is the cost of the program. Tabulating the costs involves monitoring or developing all of the related costs of the program targeted for the ROI calculation. Include the following items among the cost components:
Cost to design and develop the program, possibly prorated over the expected life of the program.
Cost of all program materials provided to each participant.
Cost of the instructor/facilitator, including preparation time as well as delivery time.
Cost of the facilities for the training program.
Cost of travel, lodging and meals for the participants, if applicable.
Salaries, plus employee benefits, of the training function, allocated in some convenient way.
In addition, specific costs related to the needs assessment and evaluation should be included, if appropriate. The conservative approach is to include all of these costs so that the total is fully loaded.

Calculating the ROI
Calculate the ROI using the program benefits and costs. The benefits/costs ratio (BCR) is the program benefits divided by the costs:
BCR = program benefits / program costs
(Sometimes this ratio is stated as a cost/benefit ratio, although the formula is the same.) The net benefits are the program benefits minus the costs:
Net benefits = program benefits - program costs
The ROI uses the net benefits divided by program costs:
ROI (%) = (net benefits / program costs) x 100
Use the same basic formula in evaluating other investments, where the ROI is traditionally reported as earnings divided by investment.
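Because the formulas above are simple arithmetic, they can be scripted directly. The following is a minimal sketch in Python, purely for illustration: the benefit and cost figures are invented, and the pct_attributed_to_training parameter is a hypothetical way of folding in the "isolating the effects" step described earlier.

# Minimal sketch of the BCR, net benefit, and ROI formulas above.
# All figures are hypothetical and for illustration only.

def roi_summary(total_improvement_value, program_costs, pct_attributed_to_training=1.0):
    # Only the share of improvement credited to the program counts as a benefit
    # (the "isolating the effects of training" step).
    benefits = total_improvement_value * pct_attributed_to_training
    bcr = benefits / program_costs                 # BCR = benefits / costs
    net_benefits = benefits - program_costs        # net = benefits - costs
    roi_pct = net_benefits / program_costs * 100   # ROI(%) = net / costs x 100
    return bcr, net_benefits, roi_pct

# Example: $120,000 of measured improvement, 60% of it attributed to the
# training by participant and supervisor estimates, fully loaded costs of $40,000.
bcr, net, roi = roi_summary(120_000, 40_000, pct_attributed_to_training=0.6)
print(f"BCR: {bcr:.2f}, net benefits: ${net:,.0f}, ROI: {roi:.0f}%")
# Prints: BCR: 1.80, net benefits: $32,000, ROI: 80%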
The ROI from some training programs is high. For example, in sales training, supervisory training, and managerial training, the ROI can be quite large, frequently over 100 percent, while the ROI value for technical and operator training may be lower.

Section 4. Selecting an Appropriate Design for the Evaluation

Why should you choose a design for your evaluation? When should you do so? Who should be involved in choosing a design? How do you select an appropriate design for your evaluation? When you hear the word experiment, it may call up pictures of people in long white lab coats peering through microscopes.
In reality, an experiment is just trying something out to see how or why or whether it works. It can be as simple as putting a different spice in your favorite dish, or as complex as developing and testing a comprehensive effort to improve child health outcomes in a city or state. Academics and other researchers in public health and the social sciences conduct experiments to understand how environments affect behavior and outcomes, so their experiments usually involve people and aspects of the environment. A new community program or intervention is an experiment, too: one that a governmental or community organization engages in to find a better way to address a community issue. It usually starts with an assumption about what will work, sometimes called a theory of change, but that assumption is no guarantee. Like any experiment, a program or intervention has to be evaluated to see whether it works and under what conditions. In this section, we'll look at some of the ways you might structure an evaluation to examine whether your program is working, and explore how to choose the one that best meets your needs. These arrangements for discovery are known as experimental (or evaluation) designs.

What do we mean by a design for the evaluation?

Every evaluation is essentially a research or discovery project. Your research may be about determining how effective your program or effort is overall, which parts of it are working well and which need adjusting, or whether some participants respond to certain methods or conditions differently from others. If your results are to be reliable, you have to give the evaluation a structure that will tell you what you want to know. That structure, the arrangement of discovery, is the evaluation's design. The design depends on what kinds of questions your evaluation is meant to answer. Some of the most common evaluation (research) questions:
Does a particular program or intervention (whether an instructional or motivational program, improving access and opportunities, or a policy change) cause a particular change in participants' or others' behavior, in physical or social conditions, in health or development outcomes, or in other indicators of success?
What component(s) and element(s) of the program or intervention were responsible for the change?
What are the unintended effects of an intervention, and how did they influence the outcomes?
If you try a new method or activity, what happens?
Will the program that worked in another context, or the one that you read about in a professional journal, work in your community, or with your population, or with your issue?
If you want reliable answers to evaluation questions like these, you have to ask them in a way that will show you whether you actually got results, and whether those results were in fact due to your actions or the circumstances you created, or to other factors. In other words, you have to create a design for your research or evaluation that will give you clear answers to your questions. We'll discuss how to do that later in the section.

Why should you choose a design for your evaluation?

An evaluation may seem simple: if you can see progress toward your goal by the end of the evaluation period, you're doing OK; if you can't, you need to change. Unfortunately, it's not that simple at all.
First, how do you measure progress? Second, if there seems to be none, how do you know what you should change in order to increase your effectiveness? Third, if there is progress, how do you know it was caused by (or contributed to by) your program, and not by something else? And finally, even if you're doing well, how will you decide what you could do better, and what elements of your program can be changed or eliminated without affecting success? A good design for your evaluation will help you answer important questions like these. Some specific reasons for spending the time to design your evaluation carefully include:
So your evaluation will be reliable. A good design will give you accurate results. If you design your evaluation well, you can trust it to tell you whether you're actually having an effect, and why. Understanding your program to this extent makes it easier to achieve and maintain success.
So you can pinpoint areas you need to work on, as well as those that are successful. A good design can help you understand exactly where the strong and weak points of your program or intervention are, and give you clues as to how they can be further strengthened or changed for the greatest impact.
So your results are credible. If your evaluation is designed properly, others will take your results seriously. If a well-designed evaluation shows that your program is effective, you're much more likely to be able to convince others to use similar methods, and to convince funders that your organization is a good investment.
So you can identify factors unrelated to what you're doing that have an effect, positive or negative, on your results and on the lives of participants. Participants' histories, crucial local or national events, the passage of time, personal crises, and many other factors can influence the outcome of a program or intervention for better or worse. A good evaluation design can help you to identify these, and either correct for them if you can, or devise methods to deal with or incorporate them.
So you can identify unintended consequences (both positive and negative) and correct for them. A good design can show you all of what resulted from your program or intervention, not just what you expected. If you understand that your work has consequences that are negative as well as positive, or that it has more and/or different positive consequences than you anticipated, you can adjust accordingly.
So you'll have a coherent plan and organizing structure for your evaluation. It will be much easier to conduct your evaluation if it has an appropriate design. You'll know better what you need to do in order to get the information you need. Spending the time to choose and organize an evaluation design will pay off in the time you save later and in the quality of the information you get.

When should you choose a design for your evaluation?

Once you've determined your evaluation questions and gathered and organized all the information you can about the issue and ways to approach it, the next step is choosing a design for the evaluation. Ideally, this all takes place at the beginning of the process of putting together a program or intervention. Your evaluation should be an integral part of your program, and its planning should therefore be an integral part of the program planning. That's the ideal; now let's talk about reality.
If you're reading this, the chances are probably at least 50-50 that you're connected to an underfunded government agency or to a community-based or non-governmental organization, and that you're planning an evaluation of a program or intervention that's been running for some time, months or even years. Even if that's true, the same guidelines apply. Choose your questions, gather information, choose a design, and then go on through the steps presented in this chapter. Evaluation is important enough that you won't really be accomplishing anything by taking shortcuts in planning it. If your program has a cycle, then it probably makes sense to start your evaluation at the beginning of it: the beginning of a year or a program phase, where all participants are starting from the same place, or from the beginning of their involvement. If that's not possible, because your program has a rolling admissions policy, or provides a service whenever people need it, and participants are all at different points, that can sometimes present research problems. You may want to evaluate the program's effects only with new participants, or with another specific group. On the other hand, if your program operates without a particular beginning and end, you may get the best picture of its effectiveness by evaluating it as it is, starting whenever you're ready. Whatever the case, your design should follow your information gathering and synthesis.

Who should be involved in choosing a design?

If you're a regular Tool Box user, and particularly if you've been reading this chapter, you know that the Tool Box team generally recommends a participatory process involving both research and community partners, including all those with an interest in or who are affected by the program, in planning and implementation. Choosing a design for evaluation presents somewhat of an exception to this policy, since scientific or evaluation partners may have a much clearer understanding of what is required to conduct research, and of the factors that may interfere with it. As we'll see in the how-to part of this section, there are a number of considerations that have to be taken into account to gain accurate information that actually tells you what you want to know. Graduate students generally take courses to gain the knowledge they need to conduct research well, and even some veteran researchers have difficulty setting up an appropriate research design. That doesn't mean a community group can't learn to do it, but rather that the time they would have to spend on acquiring background knowledge might be too great. Thus, it makes the most sense to assign this task (or at the very least its coordination) to an individual or small group with experience in research and evaluation design. Such a person can not only help you choose among possible designs, but explain what each design entails, in time, resources, and necessary skills, so that you can judge its appropriateness and feasibility for your context.
How do you choose a design for your evaluation?

How do you go about deciding what kind of research design will best serve the purposes of your evaluation? The answer to that question involves an examination of four areas:
The nature of the research questions you are trying to answer
The challenges to the research, and the ways they can be resolved or reduced
The kinds of research designs that are generally used, and what each design entails
The possibility of adapting a particular research design to your program or situation: what the structure of your program will support, what participants will consent to, and what your resources and time constraints are
We'll begin this part of the section with an examination of the concerns research designs should address, go on to considering some common designs and how well they address those concerns, and end with some guidelines for choosing a design that will both be possible to implement and give you the information you need about your program.
Note: in this part of the section, we're looking at evaluation as a research project. As a result, we'll use the term research in many places where we could just as easily have said, for the purposes of this section, evaluation. Research is more general, and some users of this section may be more concerned with research in general than with evaluation in particular.

Concerns research designs should address

The most important consideration in designing a research project, except perhaps for the value of the research itself, is whether your arrangement will provide you with valid information. If you don't design and set up your research project properly, your findings won't give you information that is accurate and likely to hold true in other situations. In the case of an evaluation, that means that you won't have a basis for adjusting what you do to strengthen and improve it. Here's a far-fetched example that illustrates this point. If you took children's heights at age six, then fed them large amounts of a specific food (say, carrots) for three years and measured them again at the end of the period, you'd probably find that most of them were considerably taller at nine years than at six. You might conclude that it was eating carrots that made the children taller, because your research design gave you no basis for comparing these children's growth to that of other children. There are two kinds of threats to the validity of a piece of research. They are usually referred to as threats to internal validity (whether the intervention produced the change) and threats to external validity (whether the results are likely to apply to other people and situations).

Threats to internal validity

These are threats (or alternative explanations) to your claim that what you did caused changes in the direction you were aiming for. They are generally posed by factors operating at the same time as your program or intervention that might have an effect on the issue you're trying to address. If you don't have a way of separating their effects from those of your program, you can't tell whether the observed changes were caused by your work, or by one or more of these other factors. They're called threats to internal validity because they're internal to the study: they have to do with whether your intervention, and not something else, accounted for the difference. There are several kinds of threats to internal validity:
History. Both participants' personal histories (their backgrounds, cultures, experiences, education, etc.)
and external events that occur during the research period (a disaster, an election, conflict in the community, a new law) may influence whether or not there's any change in the outcomes you're concerned with.
Maturation. This refers to the natural physical, psychological, and social processes that take place as time goes by. The growth of the carrot-eating children in the example above is a result of maturation, for instance, as might be a decline in risky behavior as someone passes from adolescence to adulthood, the development of arthritis in older people, or participants becoming tired during learning activities towards the end of the day.
The effects of testing or observation on participants. The mere fact of a program's existence, or of their taking part in it, may affect participants' behavior or attitudes, as may the experience of being tested, videotaped, or otherwise observed or measured.
Changes in measurement. An instrument (a blood pressure cuff or a scale, for instance) can change over time, or different ones may not give the same results. By the same token, observers (those gathering information) may change their standards over time, or two or more observers may disagree on the observations.
Regression toward the mean. This is a statistical term that refers to the fact that, over time, the very high and very low scores on a measure (a test, for instance) often tend to drift back toward the average for the group. If you start a program with participants who, by definition, have very low or high levels of whatever you're measuring (reading skill, exposure to domestic violence, particular behavior toward people of other races or backgrounds, etc.), their scores may end up closer to the average over the course of the evaluation period even without any program.
The selection of participants. Those who choose participants may slant their selection toward a particular group that is more or less likely to change than a cross-section of the population from which the group was selected. (A good example is that of employment training programs that get paid according to the number of people they place in jobs. They're more likely to select participants who already have all or most of the skills they need to become employed, and to neglect those who have fewer skills, and who therefore most need the service.) Selection can play a part when participants themselves choose to enroll in a program (self-selection), since those who decide to participate are probably already motivated to make changes. It may also be a matter of chance: members of a particular group may, simply by coincidence, share a characteristic that will set their results on your measures apart from the norm of the population you're drawing from. Selection can also be a problem when two groups being compared are chosen by different standards. We'll discuss this further below when we deal with control or comparison groups.
The loss of data or participants. If too little information is collected about participants, or if too many drop out well before the research period is over, your results may be based on too little data to be reliable. This also arises when two groups are being compared: if their losses of data or participants are significantly different, comparing them may no longer give you valid information.
The nature of change. Often, change isn't steady and even. It can involve leaps forward and leaps backward before it gets to a stable place, if it ever does.
(Think of looking at the performance of a sports team halfway through the season. No matter what its record is at that moment, you won't know how well it will finish until the season is over.) Your measurements may take place over too short a period, or come at the wrong times, to track the true course of the change (or lack of change) that's occurring.
A combination of the effects of two or more of these. Two or more of these factors may combine to produce or prevent the changes your program aims to produce. A language-study curriculum that is tested only on students who already speak two or more languages runs into problems with both participants' history (all the students have experience learning languages other than their own) and selection (you've chosen students who are very likely to be successful at language learning).

Threats to external validity

These are factors that affect your ability to apply your research results in other circumstances, and to increase the chances that your program and its results can be reproduced elsewhere or with other populations. If, for instance, you offer parenting classes only to single mothers, you can't assume, no matter how successful they appear to be, that the same classes will work as well with men. Threats to external validity (or generalizability) may be the result of the interactions of other factors with the program or intervention itself, or may be due to particular conditions of the program.
Interaction of testing or data collection and the program or intervention. An initial test or observation might change the way participants react to the program, making a difference in final outcomes. Since you can't assume that another group will have the same reaction or achieve similar final outcomes as a result, the external validity, or generalizability, of the findings becomes questionable.
Interaction of selection procedures and the program or intervention. If the participants selected or self-selected are particularly sensitive to the methods or purpose of the program, it can't be assumed to be effective with participants who are less sensitive or ready for the program. Parents who've been threatened by the government with the loss of their children due to child abuse may be more receptive to learning techniques for improving their parenting, for example, than parents who are under no such pressure.
The effects of the research arrangements. Participants may change behavior as a result of being observed, or may react to particular individuals in ways they would be unlikely to react to others. A classic example here is that of a famous baboon researcher, Irven DeVore, who, after years of observing troupes of baboons, realized that they behaved differently when he was there than when he wasn't. Although his intent was to observe their natural behavior, his presence itself constituted an intervention, making the behavior of the baboons he was observing different from that of a troupe that was not observed.
The interference of multiple treatments or interventions. The effects of a particular program can be changed when participants are exposed to it beforehand in a different context, or are exposed to another program before or at the same time as the one being evaluated. This may occur when participants are receiving services from different sources, or being treated simultaneously for two or more health issues or other conditions. Given the range of community programs that exist, there are many possibilities here.
Adults might be members of a high school completion class while participating in a substance abuse recovery program. A diabetic might be treated with a new drug while at the same time participating in a nutrition and physical activity program to deal with obesity. Sometimes, the sequence of treatments or services in a single program can have the same effect, with one influencing how participants respond to those that follow, even though each treatment is being evaluated separately.

Common research designs

Many books have been written on the subject of research design. While they contain too much material to summarize here, there are some basic designs that we can introduce. The important differences among them come down to how many measurements you'll take, when you will take them, and how many groups of what kind will be involved. Program evaluations generally look for the answers to three basic questions:
Was there any change in participants' or others' behavior, in physical or social conditions, or in outcomes or indicators of success during the evaluation period?
Was whatever change took place (or the lack of change) caused by your program, intervention, or effort?
What, in your program or outside it, actually caused or prevented the change?
As we've discussed, changes and improvement in outcomes may have been caused by some or all of your intervention, or by external factors. Participants' or the community's history might have been crucial. Participants may have changed as a result of simply getting older and more mature or more experienced in the world, often an issue when working with children or adolescents. Environmental factors (events, policy change, or conditions in participants' lives) can often facilitate or prevent change as well. Understanding exactly where the change came from, or where the barriers to change reside, gives you the opportunity to adjust your program to take advantage of or combat those factors. If all you had to do was to measure whatever behavior or condition you wanted to influence at the beginning and end of the evaluation, choosing a design would be an easy task. Unfortunately, it's not quite that simple; there are those nasty threats to validity to worry about. We have to keep them in mind as we look at some common research designs. Research designs, in general, differ in one or both of two ways: the number and timing of the measurements they use, and whether they look at single or multiple groups. We'll look at single-group designs first, then go on to multiple groups. Before we go any further, it is helpful to have an understanding of some basic research terms that we will be using in our discussion. Researchers usually refer to your first measurement(s) or observation(s), the ones you take before you start your program or intervention, as a baseline measure or baseline observation, because it establishes a baseline: a known level to which you compare future measurements or observations. Some other important research terms:
Independent variables are the program itself and/or the methods or conditions that the researcher (in this case, you) wants to evaluate. They're called variables because they can change: you might have chosen (and might still choose) other methods. They're independent because their existence doesn't depend on whether something else occurs: you've chosen them, and they'll stay consistent throughout the evaluation period.
Dependent variables are whatever may or may not change as a result of the presence of the independent variable(s).
In an evaluation, your program or intervention is the independent variable. (If you're evaluating a number of different methods or conditions, each of them is an independent variable.) Whatever you're trying to change is the dependent variable. (If you're aiming at change in more than one behavior or outcome, each type of change is a different dependent variable.) They're called dependent variables because changes in them depend on the action of the independent variable, or something else.
Measures are just that: measurements of the dependent variables. They usually refer to procedures whose results can be translated into numbers, and may take the form of community assessments, observations, surveys, interviews, or tests. They may also count incidents or measure the amount of the dependent variable (number or percentage of children who are overweight or obese, violent crimes per 100,000 population, etc.).
Observations might involve measurement, or they might simply record what happens in specific circumstances: the ways in which people use a space, the kinds of interactions children have in a classroom, the character of the interactions during an assessment. For convenience, researchers often use observation to refer to any kind of measurement, and we'll use the same convention here.

Pre- and post- single-group design

The simplest design is also probably the least accurate and desirable: the pre (before) and post (after) measurement or observation. This consists of simply measuring whatever you're concerned with in one group (the infant mortality rate, unemployment, water pollution), applying your intervention to that group or community, and then observing again. This type of design assumes that a difference in the two observations will tell you whether there was a change over the period between them, and also assumes that any positive change was caused by the intervention. In most cases, a pre-post design won't tell you much, because it doesn't really address any of the research concerns we've discussed. It doesn't account for the influence of other factors on the dependent variable, and it doesn't tell you anything about trends of change or the progress of change during the evaluation period, only where participants were at the beginning and where they were at the end. It can help you determine whether certain kinds of things have happened (whether there's been a reduction in the level of educational attainment or in the amount of environmental pollution in a river, for instance), but it won't tell you why. Despite its limitations, taking measures before and after the intervention is far better than no measures. Even looking at something as seemingly simple to measure pre and post as blood pressure (in a heart disease prevention program) is questionable. Blood pressure may be lower at the final observation than at the initial one, but that tells you nothing about how much it may have gone up and down in between. If the readings were taken by different people, the change may be due in part to differences in their skill, or to how relaxed each was able to make participants feel. Familiarity with the program could also have reduced most participants' blood pressure from the pre- to the post-measurement, as could some other factor that wasn't specifically part of the independent variable being evaluated.

Interrupted time series design with a single group (simple time series)

An interrupted time series uses repeated measures before and after delayed implementation of the independent variable (e.g.
the program) to help rule out other explanations. This relatively strong design, with comparisons within the group, addresses most threats to internal validity. The simplest form of this design is to take repeated observations, implement the program or intervention, and then observe a number of times during the evaluation period, including at the end. This method is a great improvement over the pre- and post- design in that it tracks the trend of change, and can therefore help show whether it was actually the independent variable that caused any change. It can also help to identify the influence of external factors, such as when the dependent variable shows significant change before the intervention is implemented. Another possibility for this design is to implement more than one independent variable, either by trying two or more, one after another (often with a break in between), or by adding each to what came before. This gives a picture not only of the progress of change, but can show very clearly what causes change. That gives an evaluator the opportunity not only to adjust the program, but to drop elements that have no effect. There are a number of variations on the interrupted time series theme, including varying the observation times, implementing the independent variable repeatedly, and implementing one independent variable, then another, then both together to evaluate their interaction. In any variety of interrupted time series design, it's important to know what you're looking for. In an evaluation of a traffic fatality control program in the United Kingdom that focused on reducing drunk driving, monthly measurements seemed to show only a small decline in fatal accidents. When the statistics for weekends, when there were most likely to be drunk drivers on the road, were separated out, however, they showed that the weekend fatality rate dropped sharply with the implementation of the program, and stayed low thereafter. Had the researchers not realized that that might be the case, the program might have been stopped, and the weekend accident rate would not have been reduced.

Interrupted time series design with multiple groups (multiple baseline time series)

This has the same possibilities as the single time series design, with the added wrinkle of using repeated measures with one or more other groups (so-called multiple baselines). By using multiple baselines (groups), the external validity or generality of the findings is enhanced: we can see if the effects occur with different groups or under different conditions. This multiple time series design, typically with staggered introduction of the intervention to different groups or communities, gives the researcher more opportunities:
You can try a method or program with two or more groups from the same population
You can try a particular method or program with different populations, to see if it's effective with others
You can vary the timing or intensity of an intervention with different groups
You can test different interventions at the same time
You can try the same two or more interventions with each of two groups, but reverse their order, to see if sequencing makes any difference
Again, there are more variations possible here.

Control group design

A common way to evaluate the effects of an independent variable is to use a control group. This group is usually similar to the participant group, but either receives no intervention at all, or receives a different intervention with the same goal as that offered to the participant group.
A control group design is usually the most difficult to set up (you have to find appropriate groups, observe both on a regular basis, etc.), but it is generally considered to be the most reliable. The term control group comes from the attempt to control outside and other influences on the dependent variable. If everything about the two groups except their exposure to the program being evaluated averages out to be the same, then any differences in results must be due to that exposure. The term comparison group is more modest: it typically refers to a community watched for similar levels of the problem or goal and for relevant characteristics of the community or population (e.g., education, poverty). The gold standard here is the randomized control group, one that is selected totally at random, either from among the population the program or intervention is concerned with (those at risk for heart disease, unemployed males, young parents) or, if appropriate, the population at large. A random group eliminates the problems of selection we discussed above, as well as issues that might arise from differences in culture, race, or other factors. A control group that's carefully chosen will have the same characteristics as the intervention group (the focus of the evaluation). If, for instance, the two groups come from the same pool of people with a particular health condition, and are chosen at random either to be treated in the conventional way or to try a new approach, it can be assumed that, since they were chosen at random from the same population, both groups will be subject, on average, to the same outside influences, and will have the same diversity of backgrounds. Thus, if there is a significant difference in their results, it is fairly safe to assume that the difference comes from the independent variable (the type of intervention), and not something else. The difficulty for governmental and community-based organizations is to find or create a randomized control group. If the program has a long waiting list, it may be able to create a control group by selecting at random those who first receive the intervention. That in itself creates problems, in that people often drop off waiting lists out of frustration or other reasons. Being included in the evaluation may help to keep them, on the other hand, by giving them a closer connection to the program and making them feel valued. An ESOL (English as a Second or Other Language) program in Boston with a three-year waiting list addressed the problem by offering those on the waiting list a different option: they received videotapes to use at home, along with biweekly tutoring by advanced students and graduates of the program. Thus, they became a comparison group with a somewhat different intervention, one that, as expected, was less effective than the program itself, but was more effective than none, and kept them on the waiting list. It also gave them a head start once they got into the classes, with many starting at a middle rather than at a beginning level. When there's no waiting list or similar group to draw from, community organizations often end up using a comparison group composed of participants in another place or program, whose members' characteristics, backgrounds, and experience may or may not be similar to those of the participant group. That circumstance can raise some of the same problems related to selection seen when there is no control group. A minimal sketch of the randomized arrangement appears below.
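As a concrete illustration of the randomized arrangement (drawing the control group from a waiting list, say, as in the ESOL example), here is a minimal sketch in Python. Everything in it is invented: the pool size, the group sizes, and the simulated outcome scores, which merely stand in for whatever real outcome measure the evaluation uses. A real analysis would also apply a proper significance test rather than a bare difference in means.

import random
import statistics

# Hypothetical pool of applicants (e.g., a waiting list), identified by ID.
pool = list(range(40))
random.shuffle(pool)            # random assignment averages out selection effects

treatment_ids = pool[:20]       # receive the program now
control_ids = pool[20:]         # wait, or receive an alternative intervention

# ...the program runs; afterwards, collect the same outcome measure for everyone.
# Simulated post-program scores keyed by participant ID:
post_scores = {pid: random.gauss(70, 10) for pid in control_ids}
post_scores.update({pid: random.gauss(78, 10) for pid in treatment_ids})

treatment_mean = statistics.mean(post_scores[pid] for pid in treatment_ids)
control_mean = statistics.mean(post_scores[pid] for pid in control_ids)
print(f"treatment mean: {treatment_mean:.1f}, control mean: {control_mean:.1f}")
print(f"difference plausibly attributable to the program: {treatment_mean - control_mean:.1f}")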
If the only potential comparisons involve very different groups, it may be better to use a design, such as an interrupted time series design, that doesn't involve a control group at all, where the comparison is within (not between) groups. Groups may look similar, but may differ in an important way. Two groups of participants in a substance abuse intervention program, for instance, may have similar histories, but if one program is voluntary and the other is not, the results aren't likely to be comparable. One group will probably be more motivated and less resentful than the other, and composed of people who already know they have a potential problem. The motivation and determination of their participants, rather than the effectiveness of the two programs, may influence the amount of change observed. This issue may come up in a single-group design as well. A program that may, on average, seem to be relatively ineffective may prove, on close inspection, to be quite effective with certain participants: those of a specific educational background, for instance, or with particular life experiences. Looking at results with this in mind can be an important part of an evaluation, and give you valuable and usable information.

Choosing a design

This section's discussion of research designs is in no way complete. It's meant to provide an introduction to what's available. There are literally thousands of books and articles written on this topic, and you'll probably want more information. There are a number of statistical methods that can compensate for less-than-perfect designs, for instance: few community groups have the resources to assemble a randomized control group, or to implement two or more similar programs to see which works better. Given this, the material that follows is meant only as broad guidelines. We don't attempt to be specific about what kind of design you need in what circumstances, but only try to suggest some things to think about in different situations. Help is available from a number of directions: much can be found on the Internet (see the Resources part of this section for a few sites); there are numerous books and articles (the classic text on research design is also cited in Resources); and universities are a great resource, both through their libraries and through faculty and graduate students who might be interested in what you're doing and be willing to help with your evaluation. Use any and all of these to find what will work best for you. Funders may also be willing either to provide technical assistance for evaluations, or to include money in your grant or contract specifically to pay for a professional evaluation. Your goal in evaluating your effort is to get the most reliable and accurate information possible, given your evaluation questions, the nature of your program, what your participants will consent to, your time constraints, and your resources. The important thing here is not to set up a perfect research study, but to design your evaluation to get real information, and to be able to separate the effects of external factors from the effects of your program. So how do you go about choosing the best design that will be workable for you? The steps are in the first sentence of this paragraph.

Consider your evaluation questions

What do you need to know? If the intent of your evaluation is simply to see whether something specific happened, it's possible that a simple pre-post design will do.
If, as is more likely, you want to know both whether change has occurred and, if it has, whether it has in fact been caused by your program, you'll need a design that helps to screen out the effects of external influences and participants' backgrounds. For many community programs, a control or comparison group is helpful, but not absolutely necessary. Think carefully about the frequency and timing of your observations and the amount of different kinds of information you can collect. With repeated measures, you can get quite an accurate picture of the effectiveness of your program from a simple time series design. Single-group interrupted time series designs, which are often the most workable for small organizations, can give you a very reliable evaluation if they're structured well. That generally means obtaining multiple baseline observations (enough to set a trend) before the program begins; observing often and documenting your observations carefully (often with both quantitative data, expressed in numbers, and qualitative data, expressed in records of incidents and of what participants did and said); and including during-intervention and follow-up observations to see whether effects are maintained. (A minimal sketch of this kind of analysis follows at the end of this subsection.) In many of these situations, a multiple-group interrupted time series design is quite possible, in the form of a naturally occurring experiment. If your program includes two or more groups or classes, each working toward the same goals, you have the opportunity to stagger the introduction of the intervention across the groups. This comparison with (and across) groups allows you to screen out such factors as the facilitator's ability and community influences (assuming all participants come from the same general population). You could also try different methods or time sequences, to see which works best. In some cases, the real question is not whether your method or program works, but whether it works better than other methods or programs you could be using. Teaching a skill (for instance, employment training, parenting, diabetes management, conflict resolution) often falls into this category. Here, you need a comparison of some sort. While evaluations of some of these (medical treatment, for example) may require a control group, others can be compared to data from the field, to published results of other programs, or, by using community-level indicators, to measurements in other communities. There are community programs where the bottom line is very simple. If you're working to control water pollution, your main concern may be the amount of pollution coming out of effluent pipes, or the amount found in the river. Your only measure of success may be keeping pollution below a certain level, which means that regular monitoring of water quality is the only evaluation you need. There are probably relatively few community programs where evaluation is this easy (you might, for instance, want to know which of your pollution-control activities is most effective), but if yours is one, a simple design may be all you need.
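Here is the minimal sketch, promised above, of a single-group interrupted time series analysis in Python. The monthly figures are invented, and the crude least-squares slope is only one illustrative way to summarize a trend, not a prescribed method.

import statistics

# Invented monthly observations of some outcome (e.g., incidents per month).
baseline = [52, 55, 53, 56, 54, 55]   # repeated measures before the program starts
post = [48, 44, 41, 40, 38, 39]       # repeated measures after it begins

def slope(values):
    # Least-squares slope per time step: a crude trend estimate.
    n = len(values)
    x_mean = (n - 1) / 2
    y_mean = statistics.mean(values)
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

print(f"baseline: mean {statistics.mean(baseline):.1f}, trend {slope(baseline):+.2f}/month")
print(f"post:     mean {statistics.mean(post):.1f}, trend {slope(post):+.2f}/month")
# A shift in level or trend at the point of intervention, and not before,
# is the evidence that the program, rather than history or maturation,
# drove the change.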
Consider the nature of your program

What does your program look like, and what is it meant to do? Does it work with participants in groups, or individually, for instance? Does it run in cycles (classes or workshops that begin and end on certain dates, or a time-limited program that participants go through only once)? Or can participants enter whenever they are ready and stay until they reach their goals? How much of the work of the program is dependent on staff, and how much do participants do on their own? How important is the program context: the way staff, participants, and others treat one another, the general philosophy of the program, the physical setting, the organizational culture? (The culture of an organization consists of accepted and traditional ways of doing things, patterns of relationships, how people dress, how they act toward and communicate with one another, etc.) If you work with participants in groups, a multiple-group design, either interrupted time series or control group, might be easier to use. If you work with participants individually, perhaps a simple time series or a single-group design would be appropriate. If your program is time-limited, either one-time-only or with sessions that follow one another, you'll want a design that fits into the schedule, and that can give you reliable results in the time you have. One possibility is to use a multiple-group design, with groups following one another session by session. The program for each group might be adjusted, based on the results for the group before, so that you could test new ideas each session. If your program has no clear beginning and end, you're more likely to need a single-group design that considers participants individually, or by the level of their baseline performance. You may also have to compensate for the fact that participants may be entering the program at different levels, or with different goals. A proverb says that you never step in the same river twice, because the water that flows past a fixed point is always changing. The same is true of most community programs. Someone coming into a program at a particular time may have a totally different experience than a similar person entering at a different time, even though the operation of the program is the same for both. A particular participant may encourage everyone around her, and create an overwhelmingly positive atmosphere different from that experienced by participants who enter the program after she has left, for example. It's very difficult to control for this kind of difference over time, but it's important to be aware that it can, and often does, exist, and may affect the results of a program evaluation. If the organizational or program context and culture are important, then you'll probably want to compare your results with participants to those in a control group in a similar situation where those factors are different, or are ignored. There is, of course, a huge range of possibilities here: nearly any design can be adapted to nearly any situation in the right circumstances. This material is meant only to give you a sense of how to start thinking about the issue of design for an evaluation.

Consider what your participants (and staff) will consent to

In addition to the effect that it might have on the results of your evaluation, you might find that a lot of observation can raise protests from participants who feel their privacy is threatened, or from already-overworked staff members who see adding evaluation to their job as just another burden.
You may be able to overcome these obstacles, or you may have to compromise (fewer or different kinds of observations, a less intrusive design) in order to be able to conduct the evaluation at all. There are other reasons that participants might object to observation, or at least intense observation. Potential for embarrassment, a desire for secrecy (to keep their participation in the program from family members or others), even self-protection (in the case of domestic violence, for instance) can contribute to unwillingness to be a participant in the evaluation. Staff members may have some of the same concerns. There are ways to deal with these issues, but there's no guarantee that they'll work. One is to inform participants at the beginning about exactly what you're hoping to do, listen to their objections, and meet with them (more than once, if necessary) to come up with a satisfactory approach. Staff members are less likely to complain if they're involved in planning the evaluation, and thus have some say over the frequency and nature of observations. The same is true for participants. Treating everyone's concerns seriously and including them in the planning process can go a long way toward assuring cooperation.

Consider your time constraints

As we mentioned above, the important thing here is to choose a design that will give you reasonably reliable information. In general, your design doesn't have to be perfect, but it does have to be good enough to give you a reasonably good indication that changes are actually taking place, and that they are the result of your program. Just how precise you can be is at least partially controlled by the limits placed on your time by funding, program considerations, and other factors. Time constraints may also be imposed. Some of the most common:
Program structure. An evaluation may make the most sense if it's conducted to correspond with a regular program cycle.
Funding. If you are funded only for a pilot project, for example, you'll have to conduct your evaluation within the time span of the funding, and soon enough to show that your program is successful enough to be refunded. A time schedule for evaluation may be part of your grant or contract, especially if the funder is paying for it.
Participants' schedules. A rural education program may need to stop for several months a year to allow participants to plant and tend crops, for instance.
The seriousness of the issue. A delay in understanding whether a violence prevention program is effective may cost lives.
The availability of professional evaluators. Perhaps the evaluation team can only work during a particular time frame.

Consider your resources

Strategic planners often advise that groups and organizations consider resources last: otherwise they'll reject many good ideas because they're too expensive or difficult, rather than trying to find ways to make them work with the resources at hand. Resources include not only money, but also space, materials and equipment, personnel, and skills and expertise. Often, one of these can substitute for another: a staff person with experience in research can take the place of money that would be used to pay a consultant, for example. A partnership with a nearby university could get you not only expertise, but perhaps needed equipment as well. The lesson here is to begin by determining the best design possible for your purposes, without regard to resources.
You may have to settle for somewhat less, but if you start by aiming for what you want, you're likely to get a lot closer to it than if you assume you can't possibly get it.

In Summary

The way you design your evaluation research will have a lot to do with how accurate and reliable your results are, and how well you can use them to improve your program or intervention. The design should be one that best addresses key threats to internal validity (whether the intervention caused the change) and external validity (the ability to generalize your results to other situations, communities, and populations). Common research designs, such as interrupted time series or control group designs, can be adapted to various situations and combined in various ways to create a design that is both appropriate and feasible for your program. It may be necessary to seek help from a consultant, a university partner, or simply someone with research experience to identify a design that fits your needs.

A good design will address your evaluation questions, and take into consideration the nature of your program, what program participants and staff will agree to, your time constraints, and the resources you have available for evaluation. It often makes sense to consider resources last, so that you won't reject good ideas because they seem too expensive or difficult. Once you've chosen a design, you can often find a way around a lack of resources to make it a reality.

Kirkpatrick's Four-Level Training Evaluation Model

Evaluate the effectiveness of your training at four levels. If you deliver training for your team or your organization, then you probably know how important it is to measure its effectiveness. After all, you don't want to spend time or money on training that doesn't provide a good return. This is where Kirkpatrick's Four-Level Training Evaluation Model can help you objectively analyze the effectiveness and impact of your training, so that you can improve it in the future. In this article, we'll look at each of the four levels of the Kirkpatrick model, and we'll examine how you can apply the model to evaluate training. We'll also look at some of the situations where it may not be useful.

The Four Levels

Donald Kirkpatrick, Professor Emeritus at the University of Wisconsin and past president of the American Society for Training and Development (ASTD), first published his Four-Level Training Evaluation Model in 1959, in the US Training and Development Journal. The model was then updated in 1975, and again in 1994, when he published his best-known work, Evaluating Training Programs. The four levels are Reaction, Learning, Behavior, and Results. Let's look at each level in greater detail.

Level 1: Reaction

This level measures how your trainees (the people being trained) reacted to the training. Obviously, you want them to feel that the training was a valuable experience, and you want them to feel good about the instructor, the topic, the material, its presentation, and the venue. It's important to measure reaction because it helps you understand how well the training was received by your audience. It also helps you improve the training for future trainees, including identifying important areas or topics that are missing from the training.
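Reaction is typically gathered with short rating-scale surveys right after a session (more on this under How to Apply the Model, below). As a simple illustration, here is a minimal Python sketch, with hypothetical questions and ratings, of summarizing such responses:

```python
# Minimal sketch: average 1-5 ratings from a post-training
# reaction survey. Questions and ratings are hypothetical.
responses = {
    "Was the training worth your time?": [5, 4, 4, 5, 3],
    "Was the venue suitable?": [3, 4, 2, 3, 3],
    "Was the presentation style effective?": [4, 5, 4, 4, 5],
}

for question, ratings in responses.items():
    average = sum(ratings) / len(ratings)
    print(f"{question} -> {average:.1f} / 5 ({len(ratings)} responses)")
```

Low-scoring questions (here, the venue) point at what to change for future sessions.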
Level 2: Learning

At level 2, you measure what your trainees have learned. How much has their knowledge increased as a result of the training? When you planned the training session, you hopefully started with a list of specific learning objectives: these should be the starting point for your measurement. Keep in mind that you can measure learning in different ways depending on these objectives, and depending on whether you're interested in changes to knowledge, skills, or attitudes. It's important to measure this, because knowing what your trainees are learning, and what they aren't, will help you improve future training.

Level 3: Behavior

At this level, you evaluate how far your trainees have changed their behavior, based on the training they received. Specifically, this looks at how trainees apply the information. It's important to realize that behavior can only change if conditions are favorable. For instance, imagine you've skipped measurement at the first two Kirkpatrick levels and, when looking at your group's behavior, you determine that no behavior change has taken place. You might conclude that your trainees haven't learned anything and that the training was ineffective. However, just because behavior hasn't changed, it doesn't mean that trainees haven't learned anything. Perhaps their boss won't let them apply new knowledge. Or maybe they've learned everything you taught, but have no desire to apply the knowledge themselves.

Level 4: Results

At this level, you analyze the final results of your training. This includes outcomes that you or your organization have determined to be good for business, good for the employees, or good for the bottom line.

Reprinted with permission of Berrett-Koehler Publishers, Inc., San Francisco, CA, from Evaluating Training Programs. © 1996 by Donald L. Kirkpatrick & James D. Kirkpatrick. All rights reserved. (bkconnection)

Make sure that you plan your training effectively. Use our articles on Training Needs Assessment, Gagne's Nine Levels of Learning, and 4MAT to help you do this.

How to Apply the Model

Level 1: Reaction

Start by identifying how you'll measure reaction. Consider addressing these questions: Did the trainees feel that the training was worth their time? Did they think that it was successful? What were the biggest strengths of the training, and the biggest weaknesses? Did they like the venue and presentation style? Did the training session accommodate their personal learning styles? Next, identify how you want to measure these reactions. To do this, you'll typically use employee satisfaction surveys or questionnaires; however, you can also watch trainees' body language during the training, and get verbal feedback by asking trainees directly about their experience. Once you've gathered this information, look at it carefully. Then think about what changes you could make, based on your trainees' feedback and suggestions.

Level 2: Learning

To measure learning, start by identifying what you want to evaluate. (These things could be changes in knowledge, skills, or attitudes.) It's often helpful to measure these areas both before and after training. So, before training commences, test your trainees to determine their knowledge, skill levels, and attitudes. Once training is finished, test your trainees a second time to measure what they have learned, or measure learning with interviews or verbal assessments.
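Here is a minimal Python sketch of that pre-test/post-test comparison; the trainee names and scores are hypothetical:

```python
# Minimal sketch: per-trainee gain between a pre-training test
# and a post-training test. Names and scores are hypothetical.
pre_scores = {"Alice": 55, "Bob": 60, "Chen": 48, "Dina": 70}
post_scores = {"Alice": 78, "Bob": 72, "Chen": 65, "Dina": 74}

gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}

for name, gain in gains.items():
    print(f"{name}: {pre_scores[name]} -> {post_scores[name]} ({gain:+d})")

print(f"Average gain: {sum(gains.values()) / len(gains):.1f} points")
```

A small or negative gain for an individual trainee is a prompt for follow-up rather than a verdict: as the next level shows, test scores alone don't tell you whether the learning will actually be used.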
Level 3: Behavior

It can be challenging to measure behavior effectively. This is a longer-term activity that should take place weeks or months after the initial training. Consider these questions: Did the trainees put any of their learning to use? Are trainees able to teach their new knowledge, skills, or attitudes to other people? Are trainees aware that they've changed their behavior? One of the best ways to measure behavior is to conduct observations and interviews over time. Also, keep in mind that behavior will only change if conditions are favorable. For instance, effective learning could have taken place in the training session, but if the overall organizational culture isn't set up for behavior change, the trainees might not be able to apply what they've learned. Alternatively, trainees might not receive support, recognition, or reward for their behavior change from their boss. So, over time, they disregard the skills or knowledge that they have learned, and go back to their old behaviors.

Level 4: Results

Of all the levels, measuring the final results of the training is likely to be the most costly and time-consuming. The biggest challenges are identifying which outcomes, benefits, or final results are most closely linked to the training, and coming up with an effective way to measure these outcomes over the long term. Here are some outcomes to consider, depending on the objectives of your training (a sketch of tracking one such outcome follows the list): Increased employee retention. Increased production. Higher morale. Reduced waste. Increased sales. Higher quality ratings. Increased customer satisfaction. Fewer staff complaints.
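As a simple illustration of tracking a Level 4 outcome over time, this Python sketch (with hypothetical figures) compares a monthly customer-satisfaction average before and after a training rollout. Bear in mind, as the Considerations below point out, that other organizational changes can move such numbers too, so a difference like this is suggestive rather than conclusive.

```python
# Minimal sketch: compare a business metric (average monthly CSAT,
# hypothetical figures) before and after a training rollout.
months_before = [7.1, 7.0, 7.2, 7.1]  # avg CSAT for months pre-training
months_after = [7.4, 7.6, 7.5, 7.7]   # avg CSAT for months post-training

before_avg = sum(months_before) / len(months_before)
after_avg = sum(months_after) / len(months_after)

print(f"Average CSAT before: {before_avg:.2f}")
print(f"Average CSAT after:  {after_avg:.2f}")
print(f"Change: {after_avg - before_avg:+.2f}")
```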
Considerations

Although Kirkpatrick's Four-Level Training Evaluation Model is popular and widely used, there are a number of considerations to take into account when using it. One issue is that it can be time-consuming and expensive to use levels 3 or 4 of the model, so it isn't practical for all organizations and situations. This is especially the case for organizations that don't have a dedicated training or human resource department, or for one-off training sessions or programs. In a similar way, it can be expensive and resource-intensive to wire up an organization to collect data with the sole purpose of evaluating training at levels 3 and 4. (Whether or not this is practical depends on the systems already in place within the organization.)

The model also assumes that each level's importance is greater than the last, and that all levels are linked. For instance, it implies that Reaction is ultimately less important than Results, and that reactions must be positive for learning to take place. In practice, this may not be the case.

Most importantly, organizations change in many ways, and behaviors and results change depending on these changes as well as on training. For example, measurable improvements in areas like retention and productivity could result from the arrival of a new boss or a new computer system, rather than from training. Kirkpatrick's model is useful for trying to evaluate training in a scientific way; however, in fast-changing organizations so many variables may be shifting at once that analysis at level 4 is of limited usefulness.

Key Points

The Kirkpatrick Four-Level Training Evaluation Model helps trainers measure the effectiveness of their training in an objective way. The model was originally created by Donald Kirkpatrick in 1959, and has since gone through several updates and revisions. The four levels are Reaction, Learning, Behavior, and Results. By going through and analyzing each of these four levels, you can gain a thorough understanding of how effective your training was, and how you can improve it in the future. Bear in mind that the model isn't practical in all situations, and that measuring the effectiveness of training with it can be time-consuming and use a lot of resources.