Comments on The learning & technology blog: "Proving versus improving"

Comment by hoong (2008-06-21, 19:03):

When there is human input involved, there is seldom a good, correct, just, or balanced evaluation.

On the other hand, I am most interested to know what administrators (perhaps there is a better term) do with the evaluation input. A good evaluator should not stop at the input from the learners, but should take in the other issues that affect the results of the evaluations. Almost always we have learners evaluate the trainers; how often do we ask the trainers to evaluate the learners? How often do we have assessments that include, for example, environmental factors that might affect the learning process, such as bad lighting or room temperature? Or human interactions, such as personality clashes or conflicts over race or gender?

Comment by Anonymous (2008-06-20, 19:41):

Hi, very recognizable. I first had the experience on 'the other side', in Nigeria, where all learning was blocked by the focus on accountability. Now I am working on evaluations with many partner organisations. We have distinguished two types: 1) evaluations (learning exercises, whereby we monitor learning and improvement after the evaluation) and 2) results audits, which focus on accountability and are done in sensitive situations.

Still, I have to spend a lot of time making the learning focus of the first type clear.