Thank you for visiting the Education section of the World web site.


Wednesday, January 7, 2015

Presentations of Web-Based Assessment of Student Learning Applications

Mission

To coordinate and institutionalize student learning assessment efforts through the integration and implementation of the Evaluation of the Student Learning Plan of the University of Puerto Rico, Río Piedras Campus.

To provide support services to assessment program coordinators on the selection of suitable quantitative and qualitative instruments and on the dissemination of assessment results across the campus.


The OEAE is always striving to help professors analyze student learning assessment data. In collaboration with Dr. Carlos Corrada of the Computer Science program, the idea of creating a web-based application for analyzing assessment data took shape.

At the beginning of the second semester of 2012-2013, the OEAE was established as the client for which two groups of students in the Development of Web-Based Applications (MATE 4996) course had to develop a tailor-made application. The OEAE needed an application with a user-friendly interface for entering and analyzing data in order to streamline the process of assessing student learning.

On May 14, the students from the Development of Web-Based Applications (MATE 4996) course presented their applications for the assessment of student learning to the OEAE staff and the assessment coordinators from various Colleges of the campus.

The following are screenshots of the home screen of both applications:

Group 1:



Group 2:

During the first semester of the 2013-2014 academic year, some professors volunteered to use the application in their courses. Once their recommendations have been submitted and discussed, we expect to update some features at the end of this trial period. If institutional support is provided, use of the application will be expanded to more users across our campus.

We couldn't be happier with the work the students did. We are confident that this initiative will help professors analyze student learning assessment data, which in turn will help the OEAE maintain a real-time snapshot of the achievement of institutional student-centered learning outcomes.

Google Add-on Tidbits

There is an easy-to-use security checkup that lets you see which devices have logged in with your account and which applications are using your account credentials. Follow the step-by-step instructions and see whether your account is as secure as you think it is. This is a great new tool for checking all your Google email accounts: https://security.google.com/settings/security/secureaccount

  Ever wanted to mail merge from a spreadsheet in Google Sheets?  How about sending an email based on Google Form responses?  If so, this Google Drive Add-on is for you!

FormMule is one way to perform unique mail merges from your spreadsheets, with the ability to set up 15 different email templates. Check out the linked video to see how it works: http://youtu.be/KhxmvoBUC68.
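For readers who have not used a mail merge before, here is a minimal sketch of the general technique (in plain Python, not Apps Script, and not FormMule's actual code): each row of a sheet exported as CSV fills the placeholders of a message template. The column names, file contents, and template are hypothetical.

```python
import csv
import io

# A tiny stand-in for a sheet exported as CSV; in practice this would be
# the Google Sheet holding form responses (columns are hypothetical).
SHEET_CSV = """email,name,course
ana@example.edu,Ana,MATE 4996
luis@example.edu,Luis,MATE 4996
"""

# Hypothetical message template; {name} and {course} are filled per row.
TEMPLATE = "Dear {name},\n\nYour materials for {course} are attached.\n"

def merge_messages(csv_text):
    """Yield (email_address, message_body) pairs, one per spreadsheet row."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        yield row["email"], TEMPLATE.format(name=row["name"], course=row["course"])

if __name__ == "__main__":
    for address, body in merge_messages(SHEET_CSV):
        print("To:", address)
        print(body)
```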

If you already make heavy use of Google Translate, this add-on can save a lot of time. Instead of having to open Google Translate in a new browser window, you can just highlight some text and translate it from inside the document. On the downside, the add-on currently supports only five languages—English, French, German, Japanese, and Spanish. You can find the source code here: https://developers.google.com/apps-script/quickstart/docs.

Need to print out form letters with people’s names and other personalized details? Check out DocumentMerge, which lets you generate multiple Google Docs based on personal information in a corresponding Google Sheet. This add-on includes a helpful wizard to guide you through the process.

Don't like FormMule? Then try Yet Another Mail Merge. Much like DocumentMerge, it lets you generate form letters from a spreadsheet. But while DocumentMerge is for printing, Yet Another Mail Merge is for emailing. After you have created a message in Gmail and inserted some syntax, use the add-on in Google Sheets to select the message and mail it out. The add-on itself isn't intuitive, but the store listing has a straightforward walkthrough. Yet Another Mail Merge lets you send up to 99 messages per day for free, which should be just fine for personal use.

The AbleBits Suite is actually five separate add-ons, but together they give you more editing power. Remove Duplicates scans and highlights duplicate cells and provides an option to remove them. Advanced Find & Replace lets you search across spreadsheets and much more. Split Names separates values in a single cell into individual cells, and Merge Values combines multiple cell values into one cell. Find Fuzzy Matches looks for spelling variations on a given word search.

Mapping Sheets takes a list of addresses and plots them on a Google Map. You can filter the map data by category, as specified in the spreadsheet. It may not seem like a useful tool at first, but all kinds of potential uses come to mind, from giving online students a visualization of where their classmates are located to mapping all the places one might want to visit.

If you have your students use Google Docs for papers, you will certainly want them to add EasyBib. MLA, APA, and Chicago styles are available, and the add-on lets you insert citations into a Google Document directly from within the document.

If you're up on educational technology, I am sure you have heard about mind mapping and the importance of visualization to some learners. MindMeister lets you take bulleted lists and convert them into a mind map for a graphical depiction. This would be a fascinating way to convert a table of contents or the outline of a paper into something easier to read. I'd really like it to go the other way and let students create a mind map and convert it into a traditional outline, but it is still a very useful tool for education. The MindMeister Google Drive add-on gives a powerful punch to organizing your writing.

Like how Microsoft provides a Table of Contents tool in Word? The good news is that Google Drive can also insert a table of contents inside a document. The Table of Contents add-on puts it in the sidebar. You can use it in Google Docs to create the scripts and plans for an online presentation, and the table of contents sidebar will make the document much easier to navigate. Remember, just as in Microsoft Word, for it to work you have to mark things as Heading 1, Heading 2, and so on.

If you have dyslexia like I do, or are just a bad proofreader, you may want to check out the Consistency Checker add-on. It is very useful for long documents or other documents that have to be consistent. The add-on provides an extra check for spelling and also makes sure that numbers, hyphenation, and other writing mechanics are handled consistently. For college students writing project documents together, this is a great tool.

Do you use Storify? If so, the Twitter Curator Google Docs add-on could be a way to pull in tweets from your class Twitter account or another source as you annotate and discuss them. The purpose of Kaizena is to help teachers give better feedback to students. The teacher just pulls the document into Kaizena with one click and can easily add voice comments and thoughts on student work.

So you want to include clip art without having to worry so much about legal usage issues? The Open Clip Art add-on has over 50,000 pieces of clip art. It is nice that these graphics include icons, making it easier to build buttons that link to other sites. Music teachers will want to check out VexTab Music Notation.
Just like in Microsoft Office, you can use Google to create relaxing Sudoku puzzles. The Google Sheets add-on Sudoku Puzzle can generate puzzles at four difficulty levels and helps you create your own. You can also check your answers from within the sheet or insert the solution in a separate grid.

If you know of a great free Google add-on, please share it with me.

Should big data analytics be used in conjunction with opinion surveys in Education?

In a world filled with data, where most companies are starting to realize what can be done with big data analytics, why are higher education and other sectors still making decisions solely on "client opinion surveys"? Why not at least support survey results with big data analytics?

Webopedia defines big data analytics as "the process of collecting, organizing and analyzing large sets of data ("big data") to discover patterns and other useful information. Not only will big data analytics help you to understand the information contained within the data, but it will also help identify the data that is most important to the business and future business decisions." According to SAS Institute Inc., "big data analytics is the process of examining big data to uncover hidden patterns, unknown correlations and other useful information that can be used to make better decisions. With big data analytics, data scientists and others can analyze huge volumes of data that conventional analytics and business intelligence solutions can't touch." According to Margaret Rouse (2012), big data can show true "customer preferences", and one of the goals of using big data is "to help companies make more informed business decisions". Teradata states that when big data is done correctly, "it is the coming together of business and IT to produce results that differentiate, that power you forward and reduce costs. Big Data is less about the size of the data and more about the ability to handle lots of different data types and the application of powerful analytics techniques" (2014). This means "smarter decisions cut costs, improve productivity, enhance customer experience and provide any organization with a competitive advantage" (Teradata).

So why isn't everyone using big data? Rouse (2012) suggests that it is because of "a lack of internal analytics skills and the high cost of hiring experienced analytics professionals" who know tools like Hadoop, Pig, Spark, MapReduce, Hive and YARN. ThoughtWorks Inc. points out that companies need to shift their thinking from the data itself to insight and impact, and to addressing unanswered questions. Schmarzo acknowledges that educational institutions are interested in using big data to "improve student performance and raise teacher/professor effectiveness, while reducing administrative workload" and to compare one institution with another, but there is no mention of the business side of the house, or of studying current LMS usage to compare against a possible replacement. van Rijmenam's infographic shows the benefits for learning, but still no mention of using it for software changes. Fleisher explains that some institutions are not using it out of concern that acknowledging that they record all learning activities, and releasing the results, may harm students if the data got into the wrong hands. Guthrie points out that big data in education needs to go "beyond online learning"; administrators need to "understand that big data can be used in admissions, budgeting and student services to ensure transparency, better distribution of resources and identification of at-risk students" (2013). Perhaps one could classify technology application purchases as a student service, but I do not think that is what Guthrie is referring to.

Coursera was the one place I found that mentions the use of big data in education for more than learning. Their course description includes the statement: "to drive intervention and improvement in educational software and systems". So why aren't leaders doing software comparisons, including LMS reviews, with big data techniques? I think it is because top academic administrators are afraid they would find out that some of the decisions they based solely on "pilot survey results" were made on inaccurate data.

For example, let's assume an institution is trying to decide between two LMSs: "The pilot consisted of 11 courses and 162 students. With 39 students, 5 faculty and 1 TA responding to a survey, when asked whether LMS2 or LMS1 was better for teaching and learning, the results were:"


Response        Count    %      Faculty/TA breakdown
LMS2            30/45    67%    (Faculty only 5/7)
LMS1             4/45     9%    (Faculty only 0/7)
Same             5/45    11%    (Faculty only 1/7)
n/a - unsure     6/45    13%    (TA only 1/7)

Additional notes: there were only 11 courses in this single semester using LMS2, out of a total of 2,094 courses. Only 162 students were included in the LMS2 test, out of 3,991 students enrolled, and only 5 faculty and 1 TA were included, relative to the 780+ faculty on payroll.

At first glance the 67% sticks out, and some may say it is a strong indicator that the institution needs to switch to LMS2, because only 33% wanted to stay with LMS1 or were unsure LMS2 offered enough of a benefit to justify a change. But that 67% is a percentage of those who responded to the survey, not of those who want to switch. The table says "7" faculty, yet the text states that only 5 faculty and 1 TA responded, and the last I checked 5 + 1 is 6, not 7. If you compare the total number of participants with the number of surveys completed, the 67% is really based on only about 27% (45 of 168) of those who participated in the pilot. The student population is represented by only about 4% (162 of 3,991) and the faculty population by less than 1% (5 of 780+); a short sketch of this arithmetic follows the questions below. What about staff or business units that use LMS1? They were not represented at all in these results. Other questions that come to mind, and that decision makers should be asking, are:
(1) Did the faculty whose courses were included actively use LMS1 to the fullest?
(2) Were the faculty included tech savvy?
(3) Did the included faculty have a personal issue with LMS1?
(4) Which courses were actually included? Were they freshman courses or senior-level courses?
(5) What is more important: ease of use for faculty, or better learning-engagement options for students?
(6) Had participants been shown how to use LMS1 as thoroughly as they were shown LMS2?
(7) Which features of LMS2 were used, compared to the features of LMS1 that were actually used?
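For transparency, here is a small sketch that recomputes the percentages discussed above from the figures quoted in the hypothetical pilot; the numbers are the invented ones given in the example, not real data.

```python
# Figures quoted in the hypothetical pilot above.
responses = {"students": 39, "faculty": 5, "tas": 1}   # 45 survey responses
lms2_votes = 30
pilot_students, pilot_faculty, pilot_tas = 162, 5, 1
enrolled_students, total_faculty = 3991, 780           # "780+" faculty on payroll

total_responses = sum(responses.values())
pilot_participants = pilot_students + pilot_faculty + pilot_tas

print(f"Preferred LMS2: {100 * lms2_votes / total_responses:.0f}% of respondents")
print(f"Respondents as share of pilot participants: {100 * total_responses / pilot_participants:.0f}%")
print(f"Pilot students as share of enrolment: {100 * pilot_students / enrolled_students:.1f}%")
print(f"Pilot faculty as share of all faculty: {100 * pilot_faculty / total_faculty:.1f}%")
```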

I think this basic example shows that survey results alone allow for skewed reporting; add big data analytics to opinion surveys and education decision makers would have a more realistic picture and could make better decisions for the most important stakeholder, the student. Garber provides other examples of how people spin survey results to get their way. In his examples he talks about how some people cherry-pick a statistic describing just a small percentage of a population to make things look better than they are, and decision makers need to ask "What did the rest think?" (Garber). A 2012 paper talks about the need to develop an approach for detecting falsification of survey data by research interviewers, and notes that the detection approach is not limited to interviewers and could be applied to basic survey analysis. Robert Oak points out that falsification of figures is more commonplace than we might think in his article about the New York Post claim of falsified unemployment figures. Johnson, Parker, & Clements stated in their research, "Likewise, satisfaction that little or no data falsification has been detected previously should not serve as an excuse for failure to continually apply careful quality control standards to all survey operations" (2001). Fanelli's 2009 research showed that "scientists admitted to have fabricated, falsified or modified data or results at least once –a serious form of misconduct by any standard– and up to 33.7% admitted other questionable research practices. In surveys asking about the behavior of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices", which would make one think that researcher misconduct is prevalent. Or did Fanelli mislead us with these results?

Schmarzo states, "In a world where education holds the greatest potential to drive quality-of-life improvements, there are countless opportunities for educational institutions to collaborate and raise the fortunes of students, teachers, and society as a whole" (2014), by using big data along with old-fashioned surveys. The benefits of big data can be felt by all organizations.


Resources:

How to bury your academic writing

Inappropriate use of journal impact factors has been much in the spotlight. The impact factor is not only a poor indicator of research quality but it is also blamed for delaying publication of good science, and even encouraging dishonesty.  My own experience is in line with this: some of my most highly-cited work has appeared in relatively humble journals. In the age of the internet, there are three things that determine if a paper gets noticed: it needs to be tagged so that it will be found on a computer search, it needs to be accessible and not locked behind a paywall, and it needs to  be well-written and interesting.
While I'm not a slave to metrics, I am, like all academics these days, fascinated by the citation data provided by sources such as Google Scholar, and pleased when I see that something I have written has been cited by others. The other side of the coin is the depression that ensues when  I find that a paper into which I have distilled my deepest wisdom has been ignored by the world. Often, it's hard to say why one article is popular and another is not. The papers I'm proudest of tend to be those that required the greatest intellectual effort, but these are seldom the most cited. Typically, they are the more technical or mathematical articles; others find them as hard to read as I found them to write.  Google Scholar reveals, however, one factor that exerts a massive impact on whether a paper is cited or not: whether it appears in a journal or an edited book.
I've had my suspicions about this for some time, and it has made me very reluctant to write book chapters. This can be difficult. Quite often, a chapter for the proceedings is the price one is expected to pay for an expenses-paid invitation to a conference. And many of my friends and colleagues get overtaken by enthusiasm for editing a book and are keen for me to write something. But statistical analysis of citation data confirms my misgivings.
Google Scholar is surprisingly coy in terms of what it allows you to download. It will show you citations of your papers on the screen, but I have not found a way to download these data.  (I'm a recent convert to data-scraping in R, but you get a firm rap over the knuckles for improper behaviour if you attempt to use this approach to probe Google Scholar too closely). So in what follows I treated rank order of citations, rather than absolute citation level as my dependent variable. I downloaded a listing of my papers, ranked by citations, and coded them according to whether the article appeared in a journal or as a book chapter. Book chapters tend not to be empirical – they are more often review papers, or conceptual pieces - so to control for that I subdivided the journal articles into empirical and theoretical/review pieces. I also excluded papers published after 2007, to allow for the fact that recent papers haven't had a chance to get cited much, as well as any odd items such as book reviews. To make interpretation more intuitive, I inverted the rank order, so that a high score meant lots of citations, and the boxplots showing the results are in the Figure below.
Citation rank by Publication type. High rank indicates more citations. There is no significant difference between journal reviews and empirical papers, both of which have significantly higher citation rank than book chapters (p < .001)
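For readers who want to try the same comparison on their own publication list, here is a minimal sketch of the analysis described above, using invented citation counts rather than the author's Google Scholar data: papers are ranked by citations, the rank is inverted so that a high rank means many citations, and mean ranks are compared across publication types.

```python
from statistics import mean

# Made-up records standing in for the downloaded publication list:
# (citations, kind), where kind is "empirical", "review", or "chapter".
papers = [
    (310, "empirical"), (250, "review"), (180, "empirical"), (140, "review"),
    (90, "empirical"), (60, "review"), (25, "chapter"), (12, "chapter"),
    (5, "chapter"), (2, "chapter"),
]

# Sort by citation count and assign inverted ranks: 1 = least cited,
# so a high rank means many citations, as in the boxplot described above.
ordered = sorted(range(len(papers)), key=lambda i: papers[i][0])
rank = {i: pos + 1 for pos, i in enumerate(ordered)}

by_kind = {}
for i, (_, kind) in enumerate(papers):
    by_kind.setdefault(kind, []).append(rank[i])

for kind, ranks in sorted(by_kind.items()):
    print(f"{kind:10s} mean citation rank = {mean(ranks):.1f}")
```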
Because I'm nerdy about these things, I did some stats, but you don't really need them. The trend is very clear in the boxplot: book chapters don't get cited. Well, you might say, maybe this is because they aren't so good; after all, book chapters aren't usually peer reviewed. It could be true, but I doubt it. My own appraisal is that these chapters contain some of my best writing, because they allowed me to think about broader theoretical issues and integrate ideas from different perspectives in a way that is not so easy in an empirical article. Perhaps, then, it's because these papers are theoretical  that they aren't cited. But no: look at the non-empirical pieces published in journals. Their citation level is just as high as papers reporting empirical data. Could publication year play a part? As mentioned above, I excluded papers from the past five years;  after doing this, there was no overall correlation between citation level and publication year.

Things may be different for other disciplines, especially in humanities, where publication in books is much more common. But if you publish in a field where most publications are in journals, then I suspect the trend I see in my own work will apply to you too. Quite simply, if you write a chapter for an edited book, you might as well write the paper and then bury it in a hole in the ground.

Accessibility is the problem. However good  your chapter is, if readers don't have access to the book, they won't find it. In the past, there was at least a faint hope that they may happen upon the book in a library, but these days, most of us don't bother with any articles that we can't download from the internet. 
I'm curious as to whether publishers have any plans to tackle this issue. Are they still producing edited collections? I still get asked to contribute to these from time to time, but perhaps not so often as in the past. An obvious solution would be to put edited books online, just like journals, but there would need to be a radical rethink of access costs if so. Nobody is going to want to pay $30 to download a single chapter. Maybe publishers could make book chapters freely available one or two years after publication  - I see no purpose in locking this material away from the public, and it seems unlikely this would damage book sales. If publishers don't want to be responsible for putting material online, they could simply return copyright to authors, who would be free to do so.

My own solution would be for editors of such collections to take matters into their own hands, bypass publishers altogether, and produce freely downloadable, web-based copy. But until that happens, my advice to any academic who is tempted to write a chapter for an edited collection is don't.


You want to be a research assistant? Advice for psychologists


The dire state of the academic jobs market was brought home to me recently. I’d advertised for someone to act as a graduate research assistant/co-ordinator. This kind of post is a good choice for a junior person who wants to gain experience before applying for clinical or educational psychology training, or while considering whether to do a doctorate.  Normally I get around 30-40 applicants for this kind of job. This time it was 123.  This, apparently, is nothing. These days, for psychology assistant jobs, which act as a gateway to oversubscribed clinical psychology doctorate programmes,  the number of applicants can run into the hundreds.
One thing that strikes me is how little insight many applicants have into what happens to their job application. I hope that this post, explaining the process from the employer's perspective, might help aspiring job-seekers improve their chances of getting to interview.
With over 120 applications to process, if I allowed only two minutes for each application, it’d take me four hours to shortlist. Of course, that’s not how it works. There has to be an initial triage procedure where the selection panel views the applications looking for reasons not to shortlist. We were able to exclude around ¾ of the applications on the basis of a fairly brief scan. But we then had to select a shortlist of five from the remainder. This is done on the basis of a careful re-reading of those applications that survive triage.
So how do you get past this double hurdle and avoid initial triage, and then make it to the shortlist? Well, here are some tips. They seem very obvious and simple, but they are worth stating, as many of the applicants didn't seem aware of them.
  • Follow the instructions for job applicants, and read the further particulars. I gather that there are some careers advisors who recommend candidates should send their application direct to the principal investigator, rather than via administration, because it will get noticed. It will indeed, but it will create the impression that you are incapable of reading instructions.
  • Specify how you meet the selection criteria. Our university bends over backwards to operate a fair and transparent recruitment policy. We need to be able to demonstrate that our decisions are based on the selection criteria in the job advert, and not on some idiosyncratic prejudice. The ideal applicant lists the selection criteria in the same order that they appear in the job description and briefly explains how they meet them. It makes the job of the selection panel much, much easier, and they will give you credit for being both intelligent and considerate.
  • Don’t apply if you don’t meet the essential selection criteria. So, if the job requires you to drive, then don’t apply if you don’t have a driving licence (or a chauffeur).  When I was young and naïve, I assumed people wouldn’t apply for a job if they didn’t meet the criteria, and ended up appointing a non-driver to a job that involved travelling to remote locations with heavy equipment. It is not a mistake I’ll make again.
  • Don’t assume anything is obvious. To continue with the example above, if the job involves driving and you don’t mention that you can drive, the person evaluating your application won’t know whether you’ve forgotten to tell them, or if you are avoiding mentioning this because you can’t drive. Either way, it’s bad news for your application, and in the current market, it’ll go on the ‘no’ pile.
  • Don’t send a standard application that is appropriate for any job. It’s key to include a cover letter or personal statement that indicates that you have read the further particulars for this specific post. Use Google to find out more about the post/employer. On the other hand, the employer really doesn’t want or need to be told about the subject matter of the research - once I had the equivalent of a short undergraduate essay, complete with references, included in an application, and though it demonstrated keenness, it was complete overkill.
  • Read through your application before you submit it. I’ve had applicants who describe how enthusiastic they are about the prospect of working, not in my institution, but in another university. I’ve had applications where entire paragraphs were duplicated. A melange of fonts changing mid-paragraph, or even mid-sentence, creates a poor impression.
  • Run the cover letter/personal statement through a spell checker, and check the English. Anyone working for me will be sending letters and information sheets out to the general public on my behalf. It creates a bad impression if there are errors, and so you’ve a very high chance of getting on the ‘no’ pile if you make mistakes on an important document like a job application.
  • Be honest. If there’s something unusual about your application, explain it. I have, for instance, shortlisted a person who’d had a prolonged period of sick leave, but who gave a clear and honest explanation of the situation and was able to offer reassurance about ability to do the job.
  • Be concise, but not too concise. The cover letter/personal statement should cover all the selection criteria, but avoid wordiness. One to two single-spaced pages is about right.
And if you get to interview? Well, this blog post has some useful hints:
But what if you follow all my advice and still fail to get to interview? Alas, given the massive mismatch between the number of bright, talented people and the number of jobs on offer, many good candidates are bound to miss out. It certainly doesn’t mean you are unemployable. But try this exercise: look at the selection criteria and your application, and pretend you are the employer, not the candidate: An employer with a huge stack of applications and limited time. What do you think looks good, and what are the weaker points? Can you gain further experience so that the weaker points can be remedied in future job applications? Or maybe the weaknesses include something like a poor degree class, which can’t be fixed. Perhaps your specific set of talents and interests just aren’t a good fit to this kind of job, in which case you need to consider other options.  
If all else fails, you may want to cheer yourself up by reflecting on how people who don’t go along with the system can nevertheless have interesting and influential lives, by reading Hunter S. Thompson's 1958 job application to the Vancouver Sun.

Some thoughts on use of metrics in university research assessment

The UK’s Research Excellence Framework (REF) is like a walrus: it is huge, cumbersome and has a very long gestation period. Most universities started preparing in earnest for the REF early in 2011, with submissions being made late in 2013. Results will be announced in late December, just in time to cheer up our seasonal festivities.
 
Like many others, I have moaned about the costs of the REF: not just in money, but also the time spent by university staff, who could be more cheerfully and productively engaged in academic activities. The walrus needs feeding copious amounts of data: research outputs must be carefully selected and then graded in terms of research quality. Over the summer, those dedicated souls who sit on REF panels were required to read and evaluate several hundred papers. Come December, the walrus digestive system will have condensed the concerted ponderings of some of the best academic minds in the UK into a handful of rankings.

But is there a viable alternative? Last week I attended a fascinating workshop on the use of metrics in research. I had earlier submitted comments to an independent review of the role of metrics in research assessment from the Higher Education Funding Council for England (HEFCE), arguing that we need to consider cost-effectiveness when developing assessment methods. The current systems of evaluation have grown ever more complex and expensive, without anyone considering whether the associated improvements justified the increasing costs. My view is that an evaluation system need not be perfect – it just needs to be ‘good enough’ to provide a basis for disbursement of funds that can be seen to be both transparent and fair, and which does not lend itself readily to gaming.

Is there an alternative?
When I started preparing my presentation, I had intended to talk just about the use of measures of citations to rank departments, using analysis done for an earlier blogpost, as well as results from this paper by Mryglod et al. Both sources indicated that, at least in sciences, the ultimate quality-related research (QR) funding allocation for a department was highly correlated with a department-based measure of citations. So I planned to make the case that if we used a citation-based metric (which can be computed by a single person in a few hours) we could achieve much the same result as the full REF process for evaluating outputs, which takes many months and involves hundreds of people.
However, in pondering the data, I then realised that there was an even better predictor of QR funding per department: simply the number of staff entered into the REF process.

Before presenting the analysis, I need to backtrack to just explain the measures I am using, as this can get quite confusing. HEFCE deserves an accolade for its website, where all the relevant data can be found. My analyses were based on the 2008 Research Assessment Exercise (RAE). In what follows I used a file called QR funding and research volume broken down by institution and subject, which is downloadable here. This contains details of funding for each institution and subject for 2009-2010. I am sure the calculations I present here have been done much better by others and I hope they will not be shy to inform me if there are mistakes in my working.

The variables of interest are:
  • The percentages of research falling in each star band in the RAE. From this, one can compute an average quality rating, by multiplying 4* by 7, 3* by 3, and 2* by 1 and adding these, and dividing the total by 100. Note that this figure is independent of department size and can be treated as an estimate of the average quality of a researcher in that department and subject.
  • The number of full-time equivalent research-active staff entered for the RAE. This is labelled as the ‘model volume number’, but I will call it Nstaff. (In fact, the numbers given in the 2009-2010 spreadsheet are slightly different from those used in the computation, for reasons I am not clear about, but I have used the correct numbers, i.e. those in HEFCE tables from RAE2008).
  • The departmental quality rating: this is average quality rating x Nstaff. (Labelled as “model quality-weighted volume” in the file). This is summed across all departments in a discipline to give a total subject quality rating (labelled as “total quality-weighted volume for whole unit of assessment”).
  • The overall funds available for the subject are listed as “Model total QR quanta for whole unit of assessment (£)”. I have not been able to establish how this number is derived, but I assume it has to do with the size and cost of the subject, and the amount of funding available from government.
  • QR (quality-related) funding is then derived by dividing the departmental quality rating by the total subject quality rating and multiplying by overall funds. This gives the sum of QR money allocated by HEFCE to that department for that year, which in 2009 ranged from just over £2K (Coventry University, Psychology) to over £12 million (UCL, Hospital-based clinical subjects). The total QR allocation in 2009-2010 for all disciplines was just over £1 billion. A short worked sketch of this calculation appears after the list.
  • The departmental H-index is taken from my previous blogpost. It is derived by doing a Web of Knowledge search for articles from the departmental address, and then computing the H-index in the usual way. Note that this does not involve identifying individual scientists.
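To make the funding formula defined above concrete, here is a minimal worked sketch with invented numbers for a single subject containing two departments; the star profiles, staff numbers, and overall funds are hypothetical, but the 0/1/3/7 weighting and the calculation steps follow the definitions in the list.

```python
# Invented example: one subject, two departments (A and B).
# star_profile gives the percentage of research rated 4*, 3*, 2*, 1*.
departments = {
    "A": {"star_profile": {4: 25, 3: 45, 2: 25, 1: 5}, "nstaff": 30.0},
    "B": {"star_profile": {4: 10, 3: 40, 2: 35, 1: 15}, "nstaff": 12.0},
}
overall_funds = 1_000_000  # "model total QR quanta" for the whole subject, in £

WEIGHTS = {4: 7, 3: 3, 2: 1, 1: 0}  # the 0-1-3-7 weighting described above

def average_quality(profile):
    """Average quality rating: weighted star percentages divided by 100."""
    return sum(WEIGHTS[star] * pct for star, pct in profile.items()) / 100

# Departmental quality rating = average quality rating x Nstaff.
quality_volume = {
    name: average_quality(d["star_profile"]) * d["nstaff"]
    for name, d in departments.items()
}
total_quality_volume = sum(quality_volume.values())

# QR funding = departmental quality rating / subject total x overall funds.
for name, qv in quality_volume.items():
    qr = overall_funds * qv / total_quality_volume
    print(f"Department {name}: QR funding = £{qr:,.0f}")
```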
Readers who are still with me may have noticed that we'd expect QR funding for a subject to be correlated with Nstaff, because Nstaff features in the formula for computing QR funding. And this makes sense, because departments with more research staff require greater levels of funding. A key question is just how much difference does it make to the QR allocation if one includes the quality ratings from the RAE in the formula.

Size-related funding
To check this out, I computed an alternative metric, size-related funding, which multiplies the overall funds by the proportion of Nstaff in the department relative to total staff in that subject across all departments. So if across all departments in the subject there are 100 staff, a department with 10 staff would get .1 of the overall funds for the subject.
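Continuing the same invented two-department example, the sketch below computes size-related funding and the kind of ±% difference reported in Table 1 (the absolute gap between the two allocations, expressed as a percentage of QR funding).

```python
# Continues the invented two-department example above.
nstaff = {"A": 30.0, "B": 12.0}
qr_funding = {"A": 788_235, "B": 211_765}   # QR allocations from the previous sketch
overall_funds = 1_000_000

total_staff = sum(nstaff.values())
for name in nstaff:
    # Size-related funding: share of overall funds proportional to Nstaff only.
    size_related = overall_funds * nstaff[name] / total_staff
    pct_diff = 100 * abs(size_related - qr_funding[name]) / qr_funding[name]
    print(f"Department {name}: size-related = £{size_related:,.0f}, "
          f"difference vs QR = {pct_diff:.0f}%")
```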

Table 1 shows, for each subject, the mean number of staff, the mean QR funding, the correlation between Nstaff and QR funding (r QR/Nstaff), and how much a department would typically gain or lose if size-related funding were adopted, expressing the absolute difference as a percentage of QR funding (±% diff).

Table 1: Mean number of staff and QR funding by subject, with correlation between QR and Nstaff, and mean difference between QR funding and size-related funding

Subject / Mean Nstaff / Mean QR (£K) / r QR/Nstaff / ±% diff
Cardiovascular Medicine 26.3 794 0.906 23
Cancer Studies 38.1 1,330 0.939 13
Infection and Immunology 43.7 1,506 0.971 22
Other Hospital Based Clinical Subjects 58.2 1,945 0.986 23
Other Laboratory Based Clinical Subjects 21.8 685 0.952 41
Epidemiology and Public Health 26.6 949 0.986 25
Health Services Research 21.9 659 0.900 24
Primary Care & Community Based Clinical  10.4 370 0.790 29
Psychiatry, Neuroscience & Clinical Psychology 46.7 1,402 0.987 15
Dentistry 31.1 1,146 0.977 13
Nursing and Midwifery 18.0 487 0.930 32
Allied Health Professions and Studies 20.4 424 0.884 36
Pharmacy 27.5 899 0.936 24
Biological Sciences 45.1 1,649 0.978 19
Pre-clinical and Human Biological Sciences 49.4 1,944 0.887 18
Agriculture, Veterinary and Food Science 33.2 999 0.976 21
Earth Systems and Environmental Sciences 28.6 1,128 0.971 14
Chemistry 37.9 1,461 0.969 18
Physics 44.0 1,596 0.994 8
Pure Mathematics 18.4 489 0.957 24
Applied Mathematics 20.0 614 0.988 19
Statistics and Operational Research 12.6 406 0.953 19
Computer Science and Informatics 22.9 769 0.954 26
Electrical and Electronic Engineering 23.8 892 0.982 17
General Engineering; Mineral/Mining Engineering 28.9 1,073 0.958 30
Chemical Engineering 26.6 1,162 0.968 15
Civil Engineering 23.2 1,005 0.960 19
Mech., Aeronautical, Manufacturing Engineering 35.7 1,370 0.987 14
Metallurgy and Materials 21.1 807 0.948 24
Architecture and the Built Environment 18.7 436 0.961 23
Town and Country Planning 15.1 306 0.911 27
Geography and Environmental Studies 22.8 505 0.969 21
Archaeology 20.7 518 0.990 12
Economics and Econometrics 25.7 581 0.968 20
Accounting and Finance 11.7 156 0.982 19
Business and Management Studies 38.7 630 0.964 27
Library and Information Management 16.3 244 0.935 26
Law 26.6 426 0.960 30
Politics and International Studies 22.4 333 0.955 31
Social Work and Social Policy & Administration 19.1 324 0.944 26
Sociology 24.1 404 0.933 24
Anthropology 18.6 363 0.946 12
Development Studies 21.7 368 0.936 25
Psychology 21.1 424 0.919 35
Education 21.0 346 0.983 34
Sports-Related Studies 13.5 231 0.952 37
American Studies and Anglophone Area Studies 10.9 191 0.988 11
Middle Eastern and African Studies 17.7 393 0.978 17
Asian Studies 15.9 258 0.938 26
European Studies 20.1 253 0.787 30
Russian, Slavonic and East European Languages 8.7 138 0.973 22
French 12.6 195 0.979 16
German, Dutch and Scandinavian Languages 8.4 129 0.966 17
Italian 6.3 111 0.865 20
Iberian and Latin American Languages 9.1 156 0.937 17
Celtic Studies 0.0 328

English Language and Literature 20.9 374 0.982 26
Linguistics 11.7 168 0.956 18
Classics, Ancient History, Byzantine and Modern Greek Studies 19.4 364 0.992 22
Philosophy 14.4 258 0.987 23
Theology, Divinity and Religious Studies 11.4 174 0.958 32
History 20.8 366 0.988 21
Art and Design 22.7 419 0.955 37
History of Art, Architecture and Design 10.7 213 0.960 18
Drama, Dance and Performing Arts 9.8 221 0.864 36
Communication, Cultural and Media Studies 11.9 195 0.860 29
Music 10.6 259 0.863 33

Correlations between Nstaff and QR funding are very high, above .9. Nevertheless, as is evident in Table 1, if we substituted size-related funding for QR funding, the amounts gained or lost by individual departments can be substantial. In some subjects, though, mainly in the Humanities, where overall QR allocations are anyhow quite modest, the difference between size-related and QR funding is not large in absolute terms. In such cases, it might be rational to allocate funds solely by Nstaff and ignore quality ratings. The advantage would be an enormous saving in time – one could bypass the RAE or REF entirely. This might be a reasonable option if the amount of expenditure on the RAE/REF by a department exceeds any potential gain from inclusion of quality ratings.

Is the departmental H-index useful?
If we assume that the goal is to have a system that approximates the outcomes of the RAE (and I’ll come back to that later) then for most subjects you need something more than Nstaff. The issue then is whether an easily computed department-based metric such as the H-index or total citations could add further predictive power. I looked at the figures for two subjects where I had computed the departmental H-index: Psychology and Physics.  As it happens, Physics is an extreme case: the correlation between Nstaff and QR funding was .994. Adding an H-index does not improve prediction because there is virtually no variance left to explain. As can be seen from Table 1, Physics is a case where use of size-related funding might be justified, given that the difference between size-related and QR funding averages out at only 8%.

For Psychology, adding the H-index to the regression explains a small but significant 6.2% of additional variance, with the correlation increasing to .95.

But how much difference would it make in practice if we were to use these readily available measures to award funding instead of the RAE formula? The answer is more than you might think, and this is because the range in award size is so very large that even a small departure from perfect prediction can translate into a lot of money.

Table 2 shows the different levels of funding that departments would accrue depending on how the funding formula is computed. The full table is too large and complex to show here, so I'll just show every 8th institution. As well as comparing alternative size-related and H-index-based (QRH) metrics with the RAE funding formula (QR0137), I have looked at how things change if the funding formula is tweaked: either to give more linear weighting to the different star categories (QR1234), or to give more extreme reward for the highest 4* category (QR0039) – something which is rumoured to be a preferred method for REF2014. In addition, I have devised a metric that has some parallels with the RAE metric, based on the residual of the H-index after removing the effect of departmental size. This could be used as an index of quality that is independent of size; it correlates at r = .87 with the RAE average quality rating. To get an alternative QR estimate, it was substituted for the average quality rating in the funding formula to give the Size.Hres measure.
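Here is a rough sketch of the Size.Hres idea with invented numbers: regress the departmental H-index on Nstaff, take the residual as a size-adjusted quality score, and substitute it for the average quality rating in the funding formula. The data, and the shift used to keep the score positive, are my assumptions, not necessarily the choices made in the original analysis.

```python
# Invented data: departmental H-index and Nstaff for five departments.
nstaff  = [10.0, 15.0, 20.0, 30.0, 45.0]
h_index = [12.0, 20.0, 22.0, 35.0, 48.0]
overall_funds = 1_000_000

# Ordinary least squares of H-index on Nstaff (done by hand to stay
# dependency-free): predicted h = a + b * nstaff.
n = len(nstaff)
mx, my = sum(nstaff) / n, sum(h_index) / n
b = (sum((x - mx) * (y - my) for x, y in zip(nstaff, h_index))
     / sum((x - mx) ** 2 for x in nstaff))
a = my - b * mx

# Residual H-index = observed minus predicted; a size-adjusted quality score.
residuals = [y - (a + b * x) for x, y in zip(nstaff, h_index)]

# Shift the residuals so they are positive before using them like a quality
# rating (this rescaling step is an assumption, not the author's method).
shift = 1 - min(residuals)
quality = [r + shift for r in residuals]

volume = [q * x for q, x in zip(quality, nstaff)]
total = sum(volume)
for i, v in enumerate(volume):
    print(f"Dept {i + 1}: Size.Hres funding = £{overall_funds * v / total:,.0f}")
```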

Table 2: Funding results in £K from different metrics for seven Psychology departments representing different levels of QR funding


institution QR0137 Size-related QR1234 QR0039 QRH Size.Hres
A 1891 1138 1424 2247 1416 1470
B 812 585 683 899 698 655
C 655 702 688 620 578 576
D 405 363 401 400 499 422
E 191 323 276 121 279 304
F 78 192 140 44 299 218
G 26 161 81 13 60 142

To avoid invidious comparisons, I have not labelled the departments, though anyone who is curious about their identity could discover them quite readily. The two columns that use the H-index tend to give similar results, and are closer to a QR funding formula that treats the four star ratings as equally spaced points on a scale (QR1234). It is also apparent that a move to QR0039 (where most reward is given for 4* research and none for 1* or 2*) would increase the share of funds to those institutions that are already doing well, and decrease it for those who already have poorer income under the current system. One can also see that some of the universities at the lower end of the table – all of them post-1992 universities – seem disadvantaged by the RAE metric, in that the funding they received seems low relative to both their size and their H-index.

The quest for a fair solution
So what is a fair solution? Here, of course, lies the problem. There is no gold standard. There has been a lot of discussion about whether we should use metrics, but much less discussion of what we are hoping to achieve with a funding allocation.

How about the idea that we could allocate funds simply on the basis of the number of research-active staff? In a straw poll I’ve taken, two concerns are paramount.

First, there is a widely held view that we should give maximum rewards to those with the highest quality research, because this will help them maintain their high standing and incentivise others to do well. This is coupled with a view that we should not be rewarding those who don't perform. But how extreme do we want this concentration of funding to be? I've expressed concerns before that too much concentration in a few elite institutions is not good for UK academia, and that we should be thinking about helping middle-ranking institutions become elite, rather than focusing all our attention on those who have already achieved that status. The calculations from the RAE in Table 2 show how tweaking the funding formula to give higher weighting to 4* research will take money from the poorer institutions and give it to the richer ones: it would be good to see some discussion of the rationale for this approach.

The second source of worry is the potential for gaming. What is to stop a department from entering all their staff, or boosting numbers by taking on extra staff? The first point could be dealt with by having objective criteria for inclusion, such as some minimal number of first- or last-authored publications in the reporting period.  The second strategy would be a risky one, since the institution would have to provide salaries and facilities for the additional staff, and this would only be cost-effective if the QR allocation would cover it. Of course, a really cynical gaming strategy would be to hire people briefly for the REF and then fire them once it is over. However, if funding were simply a function of number of research-active staff, it would be easy to do an assessment annually, to deter such short-term strategies.

How about the departmental H-index? I have shown that it not only is a fairly good predictor of RAE QR funding outcomes on its own, incorporating as it does both aspects of departmental size and research quality, but it also correlates with the RAE measure of quality, once the effect of departmental size is adjusted for. This is all the more impressive when one notes that the departmental H-index is based on any articles listed as coming from the departmental address, whereas the quality rating is based just on those articles submitted to the RAE.

There are well-rehearsed objections to the use of citation metrics such as the H-index: first, any citation-based measure is useless for very recent articles. Second, citations vary from discipline to discipline, and in my own subject, Psychology, between sub-disciplines. Furthermore, the H-index can be gamed to some extent by self-citation, or by scientific cliques, and one way of boosting it is to insist on having your name on any publication you are remotely connected with - though the latter strategy is more likely to work for the H-index of the individual than for the H-index of the department. It is easy to find anecdotal instances of poor articles that are highly cited and good articles that are neglected. Nevertheless, it may be a 'good enough' measure when used in aggregate: not to judge individuals but to gauge the scientific influence of work coming from a given department over a period of a few years.

The quest for a perfect measure of quality
I doubt that either of these ‘quick and dirty’ indices will be adopted for future funding allocations, because it’s clear that most academics hate the idea of anything so simple. One message frequently voiced at the Sussex meeting was that quality is far too complex to be reduced to a single number.  While I agree with that sentiment, I am concerned that in our attempts to get a perfect assessment method, we are developing systems that are ever more complex and time-consuming. The initial rationale for the RAE was that we needed a fair and transparent means of allocating funding after the 1992 shake-up of the system created many new universities. Over the years, there has been mission creep, and the purpose of the RAE has been taken over by the idea that we can and should measure quality, feeding an obsession with league tables and competition. My quest for something simpler is not because I think quality is simple, but rather because I think we should use the REF just as a means to allocate funds. If that is our goal, we should not reject simple metrics just because we find them oversimplistic: we should base our decisions on evidence and go for whatever achieves an acceptable outcome at reasonable cost. If a citation-based metric can do that job, then we should consider using it unless we can demonstrate that something else works better.

I'd be very grateful for comments and corrections.
Reference  
Mryglod, O., Kenna, R., Holovatch, Y., & Berche, B. (2013). Comparison of a citation-based indicator and peer review for absolute and specific measures of research-group excellence. Scientometrics, 97(3), 767-777. DOI: 10.1007/s11192-013-1058-9

E-Learning for Current Generations

In recent years I have been working on two major concepts:
first, the connectivist theory of online learning, which views learning as a
network process; and second, the massive open online course, or MOOC, which is
an instantiation of that process. These, however, represent only the most
recent of what can be seen as a series of 'generations' of e-learning. In this
talk I describe these generations and discuss how they led to, and are a part
of, the most recent work in online learning.

The theme I would like to explore today concerns the growth
and development of our idea of online learning, or as it is sometimes called,
e-learning. What I would like to do is to describe a series of 'generations' of
technologies and approaches that have characterized the development of online
learning over the years. These generations have informed the shape of online
learning as it exists today, and will help us understand something of the
direction it will take in the future.
These generations span more than a 20-year period. Indeed,
we might even describe a 'generation zero' that predates even my own
involvement in online learning. This generation is characterized by systems such
as Plato, and represents the very idea of placing learning content online. This
includes not only text but also images, audio, video and animations. It also
represents, to a degree, the idea of programmed learning. This is the idea that
computers can present us with content and activities in a sequence determined
by our choices and by the results of online interactions, such as tests and
quizzes. We have never wandered far from this foundational idea, not even in
the 21st century. And it continues to be the point of departure for all
subsequent developments in the field of online learning.
For me, 'generation one' consists of the idea of the
network itself. My first work in the field of online learning was to set up a
bulletin board system, called Athabaska BBS, in order to allow students from
across the province to communicate with me online. It was also the time I first
began using email, the time I began using the Usenet bulletin board system, and
the time I first began using online information systems such as Gopher. The
process of connecting was involved and complex, requiring the use of modems and
special software.

As generation one developed, generation zero matured. The
personal computer became a tool anyone could use to create and store their own
content. Commercial software came into existence, including both operating
systems and application programs such as spreadsheets, word processors, and
database tools. Content could be created in novel ways - the 'mail merge'
program, for example, would allow you to print the same letter multiple times,
but each with a different name and address drawn from a database.

The next generation takes place in the early 1990s and is
essentially the application of computer games to online learning. These games
were in the first instance text-based and very simple. But they brought with
them some radical changes to the idea of learning itself.

One key development was the idea that multiple people could
occupy the same online 'space' and communicate and interact with each other.
This development coincided with the creation of IRC - Internet Relay Chat - and
meant that you were in real time communication with multiple people around the
world. But more: the gaming environment meant you could do things with other
people - explore terrain, solve puzzles, even fight with them.

Another key idea was the design of the gaming space itself.
Early computer games (and many early arcade games) were designed like
programmed learning: they were like a flow chart, guiding you through a series
of choices to a predetermined conclusion. But the online games were much more
open-ended. Players interacted with the environment, but the outcome was not
predetermined. At first it was created by chance, as in the rolling of dice in
a Dungeons and Dragons game. But eventually every game state was unique, and it
was no longer possible to memorize the correct sequence of steps to a
successful outcome.

The third element was the technology developed to enable
that which we today call object oriented programming. This changed the nature
of a computer program from a single entity that processed data to a collection
of independent entities - objects - that interacted with each other: they could
send messages to each other to prompt responses, one could be 'contained' in
another, or one could be 'part' of another. So a game player would be an
object, a monster would be an object, they would be contained in a 'room' that
was also an object, and gameplay consisted of the interactions of these objects
with each other in an unplanned open-ended way.
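A toy sketch (in Python, purely illustrative) of the object structure being described: a room object contains player and monster objects, and gameplay emerges from the messages they send each other rather than from a fixed flow chart.

```python
class GameObject:
    """Anything that can live in a room and receive messages."""
    def __init__(self, name):
        self.name = name

    def receive(self, message, sender):
        print(f"{self.name} receives '{message}' from {sender.name}")

class Player(GameObject):
    def attack(self, target):
        # Interaction is a message sent to another object.
        target.receive("attack", self)

class Monster(GameObject):
    def receive(self, message, sender):
        super().receive(message, sender)
        if message == "attack":
            print(f"{self.name} growls back at {sender.name}")

class Room(GameObject):
    """A container object: other objects are 'in' the room."""
    def __init__(self, name):
        super().__init__(name)
        self.contents = []

    def add(self, obj):
        self.contents.append(obj)

dungeon = Room("dungeon")
hero, troll = Player("hero"), Monster("troll")
dungeon.add(hero)
dungeon.add(troll)
hero.attack(troll)  # the outcome emerges from object interaction, not a fixed script
```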

During the development of this second generation we saw the
consolidation of computer-based software and content, and the commercialization
of the network itself. The many brands we saw in the 80s - Atari, Amiga, Tandy,
IBM, and many more - coalesced into the now familiar Mac-PC divide. A few major
software developers emerged, companies like Microsoft and Corel. Computers
became mainstream, and became important business (and learning) tools.

Meanwhile, the world of networks began to commercialize.
Commercial bulletin board services emerged, such as Prodigy, AOL, GEnie and
CompuServe. And the first local internet service providers came into being.
Networking became the way important people connected, and communities like the
WELL began to define a new generation of thought leaders.

You can begin to see a pattern developing here. Through the
first three generations, a familiar process of innovation occurs: first the
development and piloting of the technology (which is also when the open source
community springs up around it), then the commercialization of the technology,
then the consolidation of that commercial market as large players eliminate
weaker competitors.

The next generation sees the development of the content
management system, and in learning, the learning management system.

Both of these are applications developed in order to apply
the functionality developed in generation zero - content production and
management - to the platform developed in generation one - the world wide web.
The first content management systems were exactly like mail merge, except
instead of printing out the content, they delivered it to the remote user
(inside a computer program, the commands are exactly the same - 'print' is used
to print data to a page, print data to a file, or print data to the network).
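A small illustration of this point: in many languages the same write call works whether the destination is the screen, a file, or a stand-in for a network connection, which is why delivering merged content to a remote user looked so much like printing it. The example below uses Python file-like objects; the template, names, and file name are invented.

```python
import io
import sys

def render(template, name, destination):
    """Write the same merged content to whatever file-like object we are given."""
    destination.write(template.format(name=name))

TEMPLATE = "Hello {name}, here is this week's lesson.\n"

render(TEMPLATE, "Ana", sys.stdout)          # "print" to the screen (the page)

with open("lesson.txt", "w", encoding="utf-8") as f:
    render(TEMPLATE, "Luis", f)              # "print" to a file

network_buffer = io.StringIO()               # stand-in for a network connection
render(TEMPLATE, "Mei", network_buffer)      # "print" to the network
print("Would send over the wire:", network_buffer.getvalue().strip())
```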

Early learning management systems were very easy to define.
They consisted of a set of documents which could be merged with a list of
registered users for delivery. They also supported some of the major functions
of networks: bulletin boards, where these users could post messages to each
other, chat rooms, where they could occupy the same online space together, and
online quizzes and activities, where they could interact with the documents and
other resources.

It is interesting to me to reflect that the major debates
about online learning around this time centered on whether online learning
would be mostly about online content - that is, reflective of generation zero -
or mostly about online interaction - that is, reflective of generation one. I
remember some teachers in Manitoba swearing by the interaction model, and using
a bulletin-board style application called FirstClass - eschewing the more content-based approach I was favouring at the time.

Learning management systems drew a great deal from distance
learning. Indeed, online was (and is still) seen as nothing more than a special
type of distance learning. As such, they favoured a content-based approach, with interaction following secondarily. And a very standard model emerged:
present objectives, present content, discuss, test. More advanced systems
attempted to replicate the programmed learning paradigm. The Holy Grail of the
day was adaptive learning - a system which would test you (or pretest you) to
determine your skill level, then deliver content and activities appropriate to
that level.
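The adaptive-learning ambition can be caricatured in a few lines. Everything here - the pretest scale, the thresholds, the lesson titles - is invented for illustration; the real systems were, and are, far more elaborate.

# A bare-bones caricature of 'adaptive learning': pretest the learner,
# estimate a level, then select content matched to that level.
# All values below are hypothetical.

LESSONS = {
    "beginner":     ["What is a variable?", "Your first loop"],
    "intermediate": ["Functions and scope", "Working with files"],
    "advanced":     ["Generators", "Concurrency basics"],
}

def level_from_pretest(score):
    # score out of 10, on an invented scale
    if score < 4:
        return "beginner"
    if score < 8:
        return "intermediate"
    return "advanced"

def next_lessons(score):
    return LESSONS[level_from_pretest(score)]

print(next_lessons(6))   # ['Functions and scope', 'Working with files']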

Despite its now-apparent shortcomings, the learning
management system brought some important developments to the field.

First, it brought the idea that learning content could be
modularized, or 'chunked'. This enabled a more fine-grained presentation of
learning content than traditional sources such as textbooks and university
courses. Shorter-form learning content is almost ubiquitous today.



Second, it created the idea that these content modules or
chunks were sharable. The idea that books or courses could be broken down into
smaller chunks suggested to people that these chunks could be created in one
context and reused in another context.

And third, it brought communication and content together in the same online environment. The learning management system
became a place where these smaller content objects could be presented, and then
discussed by groups of people either in a discussion board or in a live chat.

These were the core elements of learning management
technology, and a generation of online learning research and development
centered around how content should be created, managed and discussed in online
learning environments. People discussed whether this form of learning could be
equal to classroom learning, they discussed the methodology for producing these
chunks, and they discussed the nature, role and importance of online interaction.

Around this time as well an ambitious program began in an
effort to apply some of the generation two principles to learning management
systems (and to content management in general). We came to know this effort
under the heading of 'learning objects'. In Canada we had something called the
East-West project, which was an attempt to standardize learning resources. The
United States developed IMS, and eventually SCORM. Most of the work focused on
the development of metadata, to support discoverability and sharing, but the
core of the program was an attempt to introduce second generation technology -
interactive objects - to learning and content management.

But it didn't take hold. To this day, the learning
management system is designed essentially to present content and support
discussion and activities around that content. We can understand why when we
look at the development of the previous generations of online learning.

By the time learning management systems were developed,
operating systems and application programs, along with the content they
supported, were enterprise software. Corporations and institutions supported
massive centralized distributions. An entire college or university would
standardize on, say, Windows 3.1 (and very few on anything else). 'Content'
became synonymous with 'documents' and these documents - not something fuzzy
like 'objects' - were what would be created and published and shared.

The network was by this time well into the process of
becoming consolidated. Completely gone was the system of individual bulletin
board services; everything now belonged to one giant network. Telecoms and
large service providers such as AOL were coming to dominate access. The
internet standardized around a document presentation format - HTML - and was
defined in terms of websites and pages, constituting essentially a simplified
version of the content produced by enterprise software. The same vendors that
sold these tools - companies like Microsoft and Adobe - sold web production and
viewing tools.

Probably the most interesting developments of all at the
time were happening outside the LMS environment entirely. The tools used to
support online gaming were by this time becoming commercialized. It is worth
mentioning a few of these. New forms of games were being developed and entire
genres - strategy games, for example, sports games, and first-person shooters -
became widely popular.

Though gaming remained a largely offline activity, online
environments were also beginning to develop. One of the first 3D multi-user
environments, for example, was AlphaWorld. This was followed by Second Life,
which for a while was widely popular. Online gaming communities also became
popular, such as the chess, backgammon and card playing sites set up by Yahoo.
And of course I would be remiss if I didn't mention online gambling sites.

As I mentioned, these developments took place outside the
LMS market. The best efforts of developers to incorporate aspects of gaming -
from object oriented learning design to simulations and gaming environments to
multi-user interactions - were of limited utility in learning management
systems. LMSs were firmly entrenched in the world of content production, and to
a lesser extent the world of networked communication.

This leads us next to the fourth generation, paradoxically
called web 2.0 - and in the field of online learning, e-learning 2.0.

The core ideas of web 2.0 almost defy description in
previous terminology. But two major phenomena describe web 2.0 - first, the
rise of social networks, and second, the creation of content and services that
can interact with those networks. Web 2.0 is sometimes described as the 'web as
a platform' but it is probably more accurate to see it as networking being
applied to data (or perhaps data being applied to networking).

The core technology of web 2.0 is social software. We are
most familiar with social software through brand names like Friendster,
MySpace, Twitter, LinkedIn, Facebook, and most recently, Google+. But if we
think for a moment about what social software is, it is essentially the
migration of some of your personal data - like your mailing list - to a content
management system on the web. These systems then leverage that data to create
networks. So you can now do things online - like send the same message to many
friends - that you could previously only do with specialized applications.

E-learning 2.0 is the same idea applied to e-learning
content. I am widely regarded as one of the developers of e-learning 2.0, but
this is only because I recognized that a major objective of such technologies
as learning objects and SCORM was to treat learning resources as data. The idea
was that each individual would have the same sort of content authoring and distribution capabilities previously available only to major publishers, and that these would be provided online.

E-learning 2.0 brings several important developments to the
table.

First, it brings in the idea of the social graph, which is
essentially the list of people you send content to, and the list of people who
send you content, and everyone else's list, all in one big table. The social
graph defines a massive communications network in which people, rather than
computers, are the interconnected nodes.
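As a rough illustration of what that one big table looks like, consider a tiny, made-up social graph in which each person's list records who they send content to; inverting it tells you who they receive content from.

# A tiny, hypothetical social graph: people are the nodes, and each entry
# records who a person sends content to.

send_to = {
    "alice": {"bob", "carol"},
    "bob":   {"alice"},
    "carol": {"alice", "dave"},
    "dave":  set(),
}

def receives_from(person):
    # Invert the graph: everyone whose list includes this person.
    return {sender for sender, recipients in send_to.items()
            if person in recipients}

print(sorted(receives_from("alice")))   # ['bob', 'carol']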

Second, it brings in the idea of personal publishing. The
beginning of web 2.0 is arguably the development of blogging software, which
allowed people to easily create web content for the first time. But it's also
Twitter, which made creating microcontent even easier, and YouTube, which
allowed people to publish videos, and MySpace, which did the same for music,
and Facebook and Flickr, which did the same for photos.

Third, it brings in the idea of interoperability, first in the form of syndication formats such as RSS, which allow us to share our content easily with each other, and later in the form of application programming interfaces, which allow a program on one website to communicate with a program on another website. These let you use one application - your social network platform, for example - to do things in another: play a game, edit content, or talk to one another.
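For instance, one program asking another site's program for data over the web might look something like this. The endpoint is entirely hypothetical - every real service defines its own API - so the call itself is left commented out.

# A toy example of web interoperability: machine-to-machine exchange of
# data over HTTP, rather than a page meant for human eyes.

import json
import urllib.request

def fetch_status_updates(user):
    # Hypothetical JSON endpoint; real services each define their own.
    url = f"https://api.example.com/users/{user}/updates"
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

# updates = fetch_status_updates("alice")   # example.com offers no such API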

And fourth, it brings us the idea of platform-independence.
Web 2.0 is as much about mobile computing as it is about social software. It is
as much about using your telephone to post status updates or upload photos as
it is about putting your phonebook on a website. Maybe even more so.

What made web 2.0 possible? In a certain sense, it was the
maturation of generation 0, web content and applications. After being
developed, commercialized and consolidated, these became enterprise services.
But as enterprises became global, these too became global, and emerged out of the enterprise to become cloud and mobile content and applications.

Some of the major social networking sites are actually
cloud storage sites - YouTube and Flickr are the most obvious examples. Some
are less obvious, but become so when you think about it - Wikipedia, for
example. Other cloud storage sites operate behind the scenes, like Internet
Archive and Amazon Web Services. And there are cloud services, like Akamai, that never enter mainstream awareness.

These cloud services developed as a result of enterprise
networking. On the research side, high-speed backbones such as Internet 2 in
the U.S. and CA*Net 3 in Canada virtually eliminated network lag even for large
data files, audio and video. Similar capacities were being developed for lease
by the commercial sector. And the now-consolidated consumer market began to support always-on broadband capacity through ADSL or cable internet services.

The consolidation of core gaming technologies took place
largely behind the scenes. This era sees the ascendance of object-oriented languages and frameworks such as Java and .NET. The open-ended online environment
led to massive multiplayer online games such as Eve and World of Warcraft. In
learning we see the emergence of major simulation developers such as CAE and
conferencing systems such as Connect, Elluminate, and Cisco. These have become
dominant in the delivery of online seminars and classes.

Content management services, meanwhile, were increasingly
commercialized. We saw the emergence of Blackboard and WebCT, and on the
commercial side products like Saba and Docent. Google purchased Blogger, Yahoo
purchased Flickr, and even the world of open source systems came to be
dominated by quasi-commercial enterprises. Innovators moved on and began to try
radical new technologies like RSS and AJAX, Twitter and Technorati. Today we
think of social networking in terms of the giants, but when it started in the
mid-2000s the technology was uncertain and evolving. In education, probably the
major player from this era was Elgg, at that time and still to this day a novel
technology.

Today, of course, social networking is ubiquitous. The
major technologies have been commercialized and are moving rapidly toward
commodification and enterprise adoption. The ubiquity of social networking came
about as a result of the commercialization of content management services. A
new business model has emerged in which providers sell information about their
users to marketing agencies. The proliferation of social networking sites has
now been reduced to a few major competitors, notably YouTube, Facebook and
Twitter. The providers of search and document management services - Yahoo,
Microsoft, Apple and Google - have their own social networks, but these are
also-rans. Hence when people speak of 'social network learning' they often mean
'using Facebook to support learning' or some such thing.

This is the beginning of the sixth generation, a generation
characterized by commercialized web 2.0 services, a consolidation of the
CMS/LMS market, the development of enterprise conferencing and simulation
technology, cloud networking and - at last - open content and open operating
systems.

Now before the Linux advocates lynch me, let me say that, yes, there have always been open operating systems. But - frankly - until recently they have been the domain of innovators, enthusiasts and hobbyists. They were not mainstream: not, say, underlying major commercial products, the way an open BSD-derived Unix now underlies Apple's OS X, and not widely used, the way Android now powers a large percentage of mobile phones.

So that's the history of online learning through five generations, but it is also a listing of the major technologies that form the foundation for sixth-generation e-learning, which I would characterize as the Massive Open Online Course.

Let me spend a few moments talking about the development of the MOOC model.

When George Siemens and I created the first MOOC in 2008 we
were not setting out to create a MOOC. So the form was not something we
designed and implemented, at least, not explicitly so. But we had very clear
ideas of where we wanted to go, and I would argue that it was those clear ideas
that led to the definition of the MOOC as it exists today.

There were two major influences. One was the beginning of
open online courses. We had both seen them in operation in the past, and had
most recently been influenced by Alec Couros's online graduate course and David
Wiley's wiki-based course. What made these courses important was that they
invoked the idea of including outsiders in university courses in some way.
The course was no longer bounded by the institution.

The other major influence was the emergence of massive
online conferences. George had run a major conference on Connectivism, in which
I was a participant. This was just the latest in a series of such conferences.
Again, what made the format work was that the conference was open. And it was
the success of the conference that made it worth considering a longer and more
involved enterprise.

We set up Connectivism and Connective Knowledge 2008
(CCK08) as a credit course in Manitoba's Certificate in Adult Education (CAE),
offered by the University of Manitoba. It was a bit of Old Home Week for me, as
Manitoba's first-ever online course was also offered through the CAE program,
Introduction to Instruction, designed by Conrad Albertson and myself, and
offered by Shirley Chapman.

What made CCK08 different was that we both decided at the
outset that it would be designed along explicitly connectivist lines, whatever
those were. Which was great in theory, but then we began almost immediately to
accommodate the demands of a formal course offered by a traditional
institution. The course would have a start date and an end date, and a series
of dates in between, which would constitute a course schedule. Students would
be able to sign up for credit, but if they did, they would have assignments
that would be marked (by George; I had no interest in marking).

But beyond that, the course was non-traditional. Because
when you make a claim like the central claim of connectivism - that knowledge is found in the connections people form with each other, and that learning is the development and traversal of those connections - then you can't
just offer a body of content in an LMS and call it a course. Had we simply
presented the 'theory of connectivism' as a body of content to be learned by
participants, we would have undercut the central thesis of connectivism.

This seems to entail offering a course without content -
how do you offer a course without content? The answer is that the course is not
without content, but rather, that the content does not define the course. That
there is no core of content that everyone must learn does not entail that there
is zero content. Quite the opposite. It entails that there is a surplus of
content. When you don't select a certain set of canonical contents, everything
becomes potential content, and as we saw in practice, we ended up with a lot of
content.

Running the course over fourteen weeks, with each week
devoted to a different topic, actually helped us out. Rather than constrain us,
it allowed us to mitigate to some degree the effects an undifferentiated
torrent of content would produce. It allowed us to say to ourselves that we'll
look at 'this' first and 'that' later. It was a minimal structure, but one that
seemed to be a minimal requirement for any sort of coherence at all.

Even so, as it was, participants complained that there was
too much information. This led to the articulation of exactly what connectivism
meant in a networked information environment, and resulted in the definition of
a key feature of MOOCs. Learning in a MOOC, we advised, is in the first
instance a matter of learning how to select content.

By navigating the content environment, and selecting content
that is relevant to your own personal preferences and context, you are creating
an individual view or perspective. So you are first creating connections among pieces of content, and between that content and your own background and experience.
And working with content in a connectivist course does not involve learning or
remembering the content. Rather, it is to engage in a process of creation and
sharing. Each person in the course, speaking from his or her unique
perspective, participates in a conversation that brings these perspectives
together.

Why not learn content? Why not assemble a body of
information that people would know in common? The particular circumstances of
CCK08 make the answer clear, but we can also see how it generalizes. In the
case of CCK08, there is no core body of knowledge. Connectivism is a theory in
development (many argued that it isn't even a theory), and the development of
connective knowledge even more so. We were hesitant to teach people something
definitive when even we did not know what that would be.

Even more importantly, identifying and highlighting some
core principles of connectivism would undermine what it was we thought
connectivism was. It's not a simple set of principles or equations you apply
mechanically to obtain a result. Sure, there are primitive elements - the
component of a connection, for example - but you move very quickly into a realm
where any articulation of the theory, any abstraction of the principles,
distorts it. The fuzzy reality is what we want to teach, but you can't teach
that merely by assembling content and having people remember it.

So in order to teach connectivism, we found it necessary
for people to immerse themselves in a connectivist teaching environment. The
content itself could have been anything - we have since run courses in critical
literacies, learning analytics, and personal learning environments. The content is the material we work with, the creative clay we use to communicate with each other as we develop the actual learning: a finely grained and nuanced understanding of learning in a network, an understanding that develops as a result of our working within a networked environment.

In order to support this aspect of the learning, we decided
to make the course as much of a network as possible, and therefore, as little
like an ordered, structured and centralized presentation as possible. Drawing
on work we'd done previously, we set up a system whereby people would use their
own environments, whatever they were, and make connections between each other
(and each other's content) in these environments.

To do this, we encouraged each person to create his or her
own online presence; these would be their nodes in the course networks. We
collected RSS feeds from these and aggregated them into a single thread, which
became the course newsletter. We emphasized further that this thread was only
one of any number of possible ways of looking at the course contents, and we
encouraged participants to connect in any other way they deemed appropriate.
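The aggregation step can be sketched in a few lines. This is not gRSShopper itself, just an illustration of the idea, using the feedparser library and made-up feed addresses; the real system also archives, deduplicates and emails the result.

# A sketch of the aggregation idea: harvest each participant's RSS feed and
# combine the newest posts into a single newsletter thread.

import feedparser   # third-party library: pip install feedparser

participant_feeds = [
    "https://participant-one.example.org/feed.xml",   # hypothetical feeds
    "https://participant-two.example.org/feed.xml",
]

def build_newsletter(feed_urls, max_items=10):
    items = []
    for url in feed_urls:
        for entry in feedparser.parse(url).entries:
            items.append((entry.get("published", ""),
                          entry.get("title", "(untitled)"),
                          entry.get("link", "")))
    # Crude newest-first ordering by the published string; a real
    # aggregator would parse the dates properly.
    items.sort(reverse=True)
    lines = [f"- {title}\n  {link}" for _, title, link in items[:max_items]]
    return "Today's course newsletter\n\n" + "\n".join(lines)

print(build_newsletter(participant_feeds))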

This part of the course was a significant success. Of the
2200 people who signed up for CCK08, 170 of them created their own blogs, the
feeds of which were aggregated with a tool I created, called gRSShopper, and
the contents delivered by email to a total of 1870 subscribers (this number
remained constant for the duration of the course). Students also participated
in a Moodle discussion forum, in a Google Groups forum, in three separate
Second Life communities, and in other ways we didn't know about.

The idea was that in addition to gaining experience making
connections between people and ideas, participants were making connections
between different systems and places. What we wanted people to experience was
that connectivism functions not as a cognitive theory - not as a theory about
how ideas are created and transmitted - but as a theory describing how we live
and grow together. We learn, in connectivism, not by acquiring knowledge as
though it were so many bricks or puzzle pieces, but by becoming the sort of
person we want to be.

In this, in the offering of a course such as CCK08, and in
the offering of various courses after, and in the experience of other people
offering courses as varied as MobiMOOC and ds106 and eduMOOC, we see directly
the growth of individuals into the theory (which they take and mold in their
own way) as well as the growth of the community of connected technologies,
individuals and ideas. And it is in what we learn in this way that the
challenge to more traditional theories becomes evident.

Now I mentioned previously that the MOOC represents a new
generation of e-learning. To understand what that means we need to understand
what the MOOC is drawing from the previous generations, and what the MOOC
brings that is new.

Let me review:

Generation 0 brings us the idea of documents and other
learning content, created and managed using application programs. In this the
sixth generation of such technologies we have finally emerged into the world of
widespread free and open online documents and application programs. The ability
to read and write educational content, to record audio and make video, is now
open to everybody, and we leverage this in the MOOC. But this is not what makes
the MOOC new.

Additionally, a fundamental underlying feature of a
connectivist course is the network, which by now is in the process of becoming
a cloud service. WiFi is not quite ubiquitous, mobile telephony is not quite
broadband, but we are close enough to both that we are connected to each other
on an ongoing basis. The MOOC leverages the network, and increasingly depends
on ubiquitous access, but this is not what makes the MOOC new.

The MOOC as we have designed it also makes use of
enterprise 'game' technology, most specifically the conferencing system.
Elluminate has been a staple in our courses. We have also used - and may well
use again in the future - environments such as Second Life. Some other courses,
such as the Stanford AI course, have leveraged simulations and interactive
systems. Others, like ds106, emphasize multimedia. Using these and other
immersive technologies, the MOOC will become more and more like a personal
learning environment, but this is not what makes the MOOC unique.

The MOOC also makes explicit use of content management systems.
The early MOOCs used Moodle; today we encourage participants to use personal
content management systems such as WordPress and Blogger. The gRSShopper
environment itself is to a large degree a content management system, managing a
large store of user contributions and facilitator resources. But clearly, the
element of content management is not what makes the MOOC new.

And the MOOC makes a lot of use of commercial social
networking services. Twitter feeds and the Facebook group are major elements of
the course. Many students use microblogging services like Posterous and Tumblr.
Like membership in a social network, membership in the course constitutes
participation in a large graph; contents from this graph are aggregated and
redistributed using social networking channels and syndication technologies.
But many courses make use of social networks. So that is not what makes a MOOC
unique.

So what's new? I would like to suggest that the MOOC adds
two major elements to the mix, and that it is these elements that bear the most
investigation and exploration.

First, the MOOC brings the idea of distributed technology
to the mix. In its simplest expression, we could say that activities do not
take place in one central location, but rather, are distributed across a large
network of individual sites and services. The MOOC is not 'located' at
cck12.mooc.ca (or at least, it's not intended to be) - that is just one nexus
of connected sites.

In fact, it is the idea of distributed knowledge that the MOOC introduces, and the way learning happens is bound up with this idea. When you learn as a network, you cannot teach one fact after another; each fact is implicated with the others. You cannot single out one fact, because even a fact extracted from the data is only an abstraction, an idealization, no more true than any other identification of regularities in the data. Learning becomes more like a process of shaping landforms, and less like an exercise of memory. It is the process of pattern recognition that we want to develop, not the remembering of facts.

Accordingly, the second element the MOOC brings to the mix
revolves around the theory of effective networks. More deeply, the MOOC
represents the instantiation of four major principles of effective distributed
systems. These principles are, briefly, autonomy, diversity, openness and
interactivity.

For example, it is based on these principles that we say
that it is better to obtain many points of view than one. It is based on these
principles that we say that the knowledge of a collection of people is greater
than just the sum of each person’s knowledge. It is based on these principles
that we argue for the free exchange of knowledge and ideas, for open education,
for self-determination and personal empowerment.

These four principles form the essence of the design of the
network - the reason, for example, we encourage participants to use their
preferred technology (it would be a lot easier if everybody used WordPress).

We are just now as a community beginning to understand what
it means to say this. Consider 'learning analytics', for example, which is an
attempt to learn about the learning process by examining a large body of data.

What is learned in the process of learning analytics is not
what is contained in individual bits of data - that would be ridiculous - but
overall trends or patterns. What is learned, in other words, emerges from the
data. The things we are learning today are very simple. In the future we expect
to learn things that are rather more subtle and enlightening.
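A deliberately trivial example makes the point: the 'trend' below is not contained in any single number, it only emerges across the whole series, and the numbers themselves are made up.

# What analytics yields is not in any individual data point but in a
# pattern across many of them. Activity counts here are hypothetical.

weekly_posts = [4, 6, 9, 11, 15, 14, 18, 21]   # posts per week (invented)

def trend(values):
    # Compare the average of the second half to the first half; the trend
    # exists only across the whole series, not in any one number.
    half = len(values) // 2
    first, second = values[:half], values[half:]
    return (sum(second) / len(second)) - (sum(first) / len(first))

print(f"Average change between halves: {trend(weekly_posts):+.1f} posts/week")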

Let me now say a few words in closing about Generation 6
and beyond.

From my perspective, the first three generations of
e-learning (and the web generally) represent a focus on documents, while the
second three represent a focus on data. Sometimes people speak of the second
set as a focus on the Semantic Web, and they would not be wrong. Data does not
stand alone, the way documents do; the representation of any object is
connected to the representation of any number of other objects, through shared
features or properties, or by being related by some action or third party
agency.

Indeed, if the first three generations are contents,
networks and objects respectively, the second three generations are those very
same things thought of as data: the CMS is content thought of as data, web 2.0
is the network thought of as data, and the MOOC is the environment thought of
as data. So what comes after data is pretty important, but I would say, it is
also to a certain degree knowable, because it will have something to do with
content, the network, and the environment.

Here's what I think it will be - indeed, here's what I've
always thought it would be. The next three generations of web and learning
technology will be based on the idea of flow.

Flow is what happens when your content and your data
becomes unmanageable. Flow is what happens when all you can do is watch it as
it goes by - it is too massive to store, it is too detailed to comprehend. Flow
is when we cease to think of things like contents and communications and even
people and environments as things and start thinking of them as (for lack of a
better word) media - like the water in a river, like the electricity in our wires, like the air in the sky.

The first of these things that flow will be the outputs of
learning (and other) analytics; they will be the distillation of the massive
amounts of data, presented to us from various viewpoints and perspectives,
always changing, always adapting, always fluid.

Inside the gRSShopper system I am working toward the
development of the first sort of engines that capture and display this flow.
gRSShopper creates a graph of all links, all interactions, all communications.
I don't know what to do with it yet, but I think that the idea of comprehending
the interactions between these distributed systems in a learning network is an
important first step to understanding what is learned, how it is learned, and
why it is learned. And with that, perhaps, we can take our understanding of
online learning a step further.
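As a very rough sketch of what such a graph might look like in code - an illustration only, not gRSShopper's actual implementation - every link, reply or mention becomes an edge, and questions about the flow become questions about the graph.

# A hypothetical interaction graph: each recorded interaction is an edge,
# and simple questions about the network become graph queries.

from collections import Counter, defaultdict

edges = [   # (who, interacted with whom) - invented examples
    ("alice", "bob"), ("bob", "alice"), ("carol", "alice"),
    ("dave", "carol"), ("alice", "carol"),
]

graph = defaultdict(set)
for src, dst in edges:
    graph[src].add(dst)

print(sorted(graph["alice"]))                 # who alice links to
in_degree = Counter(dst for _, dst in edges)  # who receives the most links
print(in_degree.most_common(2))               # e.g. [('alice', 2), ('carol', 2)]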

But that, perhaps, may take the efforts of another
generation.


Thank you.