I didn’t love it. So, I changed it:
I think it’s better.
It’s just what it says. It’s not that awesome yet, but I realized while I was embedding YouTube videos over and over again into Canvas that I really could use a cheat sheet to remember appropriate embed sizes that respect a clean 16:9 aspect ratio. They’re not all here – I left out a couple nice ones, but these are most of them, with the most common sizes used in our LMS embeds in bold. I hope it’s useful!
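The cheat sheet boils down to one line of arithmetic: for a 16:9 frame, height is width times 9/16. A minimal sketch of that calculation (the example widths are ones commonly seen in embed dialogs, included here as assumptions, not a transcription of my cheat sheet):

```python
def embed_height(width):
    """Return the height that keeps a given embed width at a 16:9 aspect ratio."""
    return round(width * 9 / 16)

# Example widths (assumed common embed sizes, not an exhaustive list):
for w in (560, 640, 853, 1280):
    print(f"{w} x {embed_height(w)}")
```

Widths divisible by 16 (560, 640, 1280) give whole-pixel heights exactly; others, like 853, need the rounding.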
In a peer review session for course developments recently, the question of making Blackboard courses look a little bit nicer came up. Designers Madara and Tina had some good examples and some good advice on implementing images on the external parts of content folders (usually used for holding unit and weekly content). I looked into this a bit and figured out that while it's not possible to choose icons other than the defaults in Blackboard, you do have the option to turn them off and replace them with your own.
This little video loop shows what that looks like once you’ve inserted your own images into the body of the post (left-justified):
Chromebooks are awesome machines, but they only really shine when coupled with a robust internet connection. If you’re a faculty, staff, or student using a Chromebook at UAF, you should be on the faster and auto-authenticating eduroam network, not UAlaska. The authentication process is straightforward but does take a few steps.
Before you start, make sure you have internet access through another network. If you are on campus, sign on to UAlaska and gain access with your UA ID and password.
Navigate to connect.alaska.edu and log in again with your UA ID and password.
Click on the +Advanced link to expand it. Two types of certificate will be shown: a root certificate and an identity certificate. You need one of each. Download the first of each: the one in CRT format and the one in PKCS12 format.
In Google Chrome, type chrome://settings/certificates into the address bar. This will open up a Certificate manager window.
In the window, under the Your Certificates tab, click on Import and Bind to Device at the bottom.
You will be asked to enter a password. Enter your username (before the @ mark) as the password.
Click on the Authorities tab at the top of the window. You may see a bunch of folders and text in the window – ignore this and take a deep breath. Skim to the bottom of the window and click the Import… button. Import the other file you downloaded, rootCA.crt.
A window will pop up asking you if you trust that certificate. Of course you do. It’s your friend. Click all three checkboxes and hit OK.
Shut down your Chromebook and restart it.
After booting up in no time at all, go to your network connections at the bottom right of your screen and select “eduroam” as your wi-fi network.
One final step. Before eduroam will let you into the party, you need to give it a little more. A window titled “Join Wi-Fi Network” will pop up with several fields you need to fill in.
It should look like this:
Click Connect. Welcome to the Internet.
The second day of the conference started off with Dave Cormier’s keynote on rhizomatic learning. His argument is that learning happens via community, and he argues for a notion of “community as curriculum.” His main conceit is the metaphor of the rhizome, a sprawling, colony-like, and nearly indestructible plant. This is a deliberate contrast with the view of learning as a “tree of knowledge” that exists separate from others.
Dave asked the audience several big questions and gave us time to discuss among ourselves. There was a very active Twitter backchannel displayed on the board and he actively encouraged us to post our answers to his questions there, using hashtag #2015DL.
Question #1: Is learning something that we should even be trying to measure?
To paraphrase: “We cannot measure learning. It doesn’t become possible simply because we need it to be. Maybe we can only get a representation of learning, not a measurement. Learning can be evaluated, demonstrated, and reflected upon, but not measured. We’d be better off measuring effort, because that we actually can do.”
Question #2: Do we need to be teaching the right answer?
No one had the answer to this.
Question #3: What do we want school to be FOR?
I liked this keynote a lot because Dave did not claim to have a method for us, or some canned set of answers. He freely admitted that we’re in a profession with a lot of ambiguity and that changes in teaching and learning are happening fast. It was refreshing. I really appreciated one of his final quotes:
“Learning is not a process of finishing, but of never finishing.”
The remainder of the conference consisted of small sessions. For the sake of brevity, bulleted lists of major points made in each session follow. I do not necessarily agree with all of these points, but found them worthy of consideration.
“We have the tech to open up students to new worlds, we just are making terribly ineffective use of it.” -Gord Holden
This session was led by Steve Dotto, a Canadian presenter on technology and TV personality. He has a tech-tip YouTube channel with a wide viewership and certainly knew his stuff. The session was focused entirely on Screenflow, so I attended only the first half and then snuck out to a concurrent session on Virtual Worlds.
The title of this session was “Vygotsky and Bruner on Roddenberry’s Holodeck” and focused on what the presenter, Gord Holden of Heritage Christian Online School, termed VLEs – Virtual Learning Environments.
In his course, students really do build Rome in a day:
Tools used that are safe for K12: Activeworlds, Quest Atlantis, Thinking Worlds.
While all I really have here is a list of tools, I came away from this session convinced that the future of online learning is probably what we will eventually come to recognize as the Holodeck. I was so interested in this part of the conference that I hope to dedicate another blog post to it.
These are all tools that were mentioned either directly or offhand by presenters and other participants that could be worth using:
One of the first projects I was given responsibility for since being hired here at UAF eCampus was the production of a video teaser and a promotional segment for a massive online course UAF is offering in the fall. The course is excellent, and I encourage you to check it out and recommend it to anyone you know who is interested in biology, animal research, or the human condition of obsessive compulsive disorder.
These projects were a good opportunity for me to refine my rusty video editing skills and to learn how to use Screenflow, which is a fairly powerful screencasting and video editing application. While not intended as a dedicated NLE (non-linear editor), Screenflow has many of the same capabilities, and its slick WYSIWYG interface makes it easy to put together small projects very quickly.
For example, this 36 second video was very easy to create in Screenflow:
This is what my timeline in the Screenflow project looked like:
My next project after this was a longer, 3–4 minute promotional video that went into more detail about the course. I continued to use Screenflow, but it soon became apparent that the software was not designed to handle large projects well. Organization became harder and harder to manage, and I found several reliable ways to crash the application. Adjusting audio levels in clips crashed the program every single time. That is a problem.

Additionally, Screenflow stores all project media inside the Screenflow file itself. This makes management and archival easier, but it can be a real pain when you are concerned about hard disk space. Finally, some of Screenflow’s default behaviors for clip management are not as sophisticated as I have come to expect from Adobe Premiere or Final Cut Pro. Maintaining consistent sync across the timeline is important, but Screenflow’s “close gap” action only acts on a single layer in the composition, not the entire sequence. With 10 layers of tracks in my timeline, this function was essentially useless to me. Instead, I would select all the clips I wanted to retime and drag them together. The ability to nest and group clips partially helped avoid this, but some of the hierarchies were not clear, especially concerning audio levels, which seemed to lag behind the settings changes I was making.
My timeline in the longer video:
The longer 4 minute promo video:
Screenflow is a great piece of software, but I have already vowed that for my next project longer than five minutes, or with more than a bit of timeline complexity, I will do my best to get up to speed in Adobe Premiere.
I am extremely proficient at getting Screenflow to crash, even (especially) after the newest 5.0.2 update:
From Sunday, April 19th through Tuesday, April 21st, I attended the 2015 Digital Learning Conference in Vancouver, British Columbia. It was a worthwhile time, but rather odd to attend as an American working in post-secondary education. I came away with some new ideas on how to approach the problem of learning, and with three rather situational lessons from the specific context of this conference. I’ll start with those three lessons, because they really shaped my whole experience at this conference.
A K-12 Focus
Online education in the K12 sphere is very well established in BC, and this conference was 90% focused on digital learning in that K12 context. The majority of the attendees were faculty or staff of online or “virtual schools” based in the province. I met only a few other university-affiliated educators and designers here. The state of practice for the most part seems quite advanced, with a strong emphasis on, and transition toward, Moodle as an LMS.
The Agony of Continuous Entry
Many or most of these online schools operate on an “asynchronous continuous entry” model, in which the school accepts students into courses on a continuous basis and allows them to complete the course as quickly or as slowly as they wish. I discussed this with a few teachers, and it came up prominently in a few sessions and the final panel discussion. It seems awful to me, and no one I met seemed to like it. Additionally, there are no regulations on class size limits or on students per teacher, a gripe expressed by many. One attendee worked at an online school that had 4,500 students. A large question addressed at the conference was how to create community in classes like this. Some teachers had found interesting ways to mitigate the fractured sense of community in these classes, such as designating “module masters” who guide newer students through modules that the more experienced student has already completed, but it seems an almost impossible task.
Bizarre Educational Repercussions of 9/11
Public schools in British Columbia ~~are severely limited~~ have different legal considerations in the online tools that they can use in their classrooms due to the 1996 Freedom of Information and Protection of Privacy Act (FOIPPA). The law regulates the privacy of public information and, after a 2004 amendment passed in response to the USA Patriot Act, requires that any such data reside solely on Canadian servers. My initial understanding of this law’s application was somewhat misinformed. Schools are able to use a wide array of online tools, including Google Apps for Education, but they have some special considerations regarding student and parent consent. The struck-through text following this was what I had initially understood. That was wrong. ~~This forces public schools to either find Canadian-based services or manage their own servers. This may be one reason there is a wide adoption of Moodle in BC. Google services, on the other hand, are not used at all in online schools in British Columbia. Private schools do not face this limitation and I heard several teachers discuss the effective use of hangouts and Google+ communities in their classes. This creates a rift between what kinds of affordances are available to teachers practicing online within the same province, and to me seems like a setback for those public schools.~~
Thanks to Julia Hengstler @jhengstler (mentioned in the following paragraph) for reading this post and pointing out my misperception. Thanks to Breanne Quist @quistb for her clear description of PIPA and FIPPA on her Privacy Compass site.
Digital footprints and online risk are issues that span generations. They were addressed in fairly good detail in a series of presentations by Julia Hengstler and her former graduate students Kristin Sward and Breanne Quist. They represented Vancouver Island University’s Online Learning and Teaching Graduate Diploma, in their Centre for Education and Cyberhumanity. While I did not entirely agree with their extremely safe approach of always getting student and parental consent for every online tool that is used, they espoused what I thought was a pretty progressive attitude toward the whole concept of online risk. Julia Hengstler made a great point about how we tend to overplay risks online. In assessing risk, it is not necessary to outline everything that could possibly go wrong. For example, a teacher asking for parental permission for a physical field trip to a museum would never think of adding “you understand that a flasher might expose themselves to your child in the bathroom.” Yes, it is a risk, but it doesn’t fall into the realm of reasonable risk.

The other excellent takeaway from these sessions was that we should encourage learners to actively create and manage their digital footprints. While it may seem safer to have no footprint at all, this is actually riskier, because you are not staking a claim to your own digital territory and online identity. The more positive and well-curated content you have actively representing you online, the less likely you are to suffer from what others are able to add to that footprint secondhand. One concern that particularly interested me was about second language learners creating digital footprints before they are proficient in the target language. The language products and artifacts that they produce as beginners could be detrimental to them years later when being evaluated for higher education or applying for jobs.
Kristin Sward and Breanne Quist presented projects from their own graduate work focusing on these topics. Kristin created a web course in Digital Citizenship, and in her current role as an Educational Technology Facilitator she makes use of a service called Everfi, which uses badging to educate students in digital literacy (the service maintains Canadian servers for her and other users). Breanne’s project is called the Privacy Compass: a very in-depth database of preapproved tools for use in her school, with a description of each tool and consent forms in multiple formats. Rather than simply whitelisting or blacklisting certain tools, this allows teachers in the school to use any tool they wish once it has undergone the full risk assessment.
Upon my recent return to Alaska after years spent living in Japan, one of the largest annoyances for me has been the slow, expensive internet access.
Well, this week I’ve been back in Japan where I have been using our home connection. It’s gigabit fiber, fast and hard to max out. The bandwidth bottlenecks are typically at the remote server, or our own in-home wifi.
I just thought I’d chronicle the speeds I am getting here, both with servers inside Japan and to UAF.
Tests with a domestic (Japanese) server:
1. Computer connected to the fiber router via ethernet cable (gigabit):
(this is an old test from last year, because my current laptop doesn’t have an ethernet port)
2. Computer connected via 802.11n wifi:
3. Computer connected via 802.11a/b/g wifi:
Test with a UAF server:
1. Connected via 802.11n wifi:
As you can see, the connection with UAF is quite good, just fast enough to warrant being on an 802.11n network. The first test is simply a proof of concept, and I doubt users will see those speeds for actual content, although it may be possible for services delivered directly by our fiber line provider, NTT, or our ISP, NTT Plala. We pay a total of 6,500 yen per month for unlimited access, though I believe our ISP engages in some traffic shaping, which is legal in Japan. Japan has deregulated its lines, opening them to any provider who wants to sell service on them. Anyway, it’s been nice to use this connection while I am here this week.
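When reading speed-test numbers like these, it helps to remember that link speeds are quoted in megabits per second while file sizes are in megabytes, so divide by eight. A quick sketch of that conversion (the file size and speeds below are illustrative assumptions, not my measured results):

```python
def download_seconds(file_mb, speed_mbps):
    """Seconds to transfer file_mb megabytes at speed_mbps megabits per second."""
    return file_mb * 8 / speed_mbps

# Illustrative numbers only: a 700 MB video file over a near-gigabit wired
# link versus a congested 40 Mbps wifi link (both values are assumptions).
print(round(download_seconds(700, 940), 1))
print(round(download_seconds(700, 40), 1))
```

In practice the remote server or the in-home wifi is usually the bottleneck, so real transfers rarely hit the quoted link speed.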
UAF eCampus has been lucky to have a Double robot loaned to us for the past few weeks. We love to explore new technology here, so having a telepresence robot was more than cool. I had been able to drive it around a few times before, once from Japan when it was in London, but being able to see in person how others use it and respond to it, and to interact with others via the robot was very enlightening and I’ve learned a lot.
Over the past two weeks, I’ve tried to peel away the layers of techniness, newness, and flashiness that new technologies naturally have, but that can sometimes obfuscate shortcomings of the device or simply be distractions to the fact that the device is really nothing new at all.
Well, I think that the Double is something new. I think it’s the real deal. It does have some shortcomings, but the device, if used purposefully and creatively, has some clear affordances that cannot be had with other telepresence technologies.
First, the robot provides real autonomy for the remote user, as they are perceived as a physical entity. Provided the user is in a friendly environment, they have complete control over the robot’s movements, which means that they can enter and leave both rooms and conversations at will. They can also enter and leave the robot itself at will. When I took the Double to a training session at the UAF U-Park building, a colleague was able to call in and make her presence seamlessly part of the room. The robot wasn’t there until she was. It had no presence until she arrived. She didn’t disturb the flow of the room by arriving, and had no problem leaving. It’s almost as if she had simply walked in, and then walked out a while later. It was surprisingly natural.
Second, the Double gives a remote user a great amount of control, which really enhances their sense of being present in the physical location. During a visit to the UAF Museum of the North last week, Director of Education and Public Programs Jennifer Arseneau remarked how much more natural it was to give the robot a tour than if she had been holding a tablet and pointing it around. In that situation, the tour may have been disorienting for both parties involved, with lines of eye contact jumping back and forth between the device and the guide. The remote presence would have been slaved to the physical guide, not independently negotiated between them. The ability to control one’s own field of view, position, and speed makes one consider and feel the space one is in.
Third, the robot provides real opportunities for meaningful interaction. The possibility for simple interaction is obvious, of course, but it’s almost too obvious, so that our imaginations stop at a rather uninspired point. Here are some of the obvious scenarios:
These are great uses, and probably justify the purchase of a Double, because it does them well. The Kodiak Island School District is making great use of Doubles in this way. But they are not groundbreaking concepts. This dynamic has been in use for half a century, with students attending class and receiving lessons over shortwave radio, such as in the Australian Outback and Bush Alaska. Essentially, these uses simply extend the existing paradigm of students and teachers in classrooms.
At its core, the Double device is an iPad on a Stick on a Segway. It’s a remarkably simple design and not incredibly groundbreaking when considered as an assembly of parts. So why can the Double afford users something that existing technologies cannot? It’s because it makes use of our human tendency to treat the robot as a person. And unlike existing technologies, it is present. Let’s not even use the term “virtual presence” or even “telepresence.” There’s nothing virtual about the robot. There’s nothing virtual about the person on the other side. They are both real. And after a little time with it, the physical interactors start to treat it more like a human than an iPad on a Stick on a Segway.
There are some downsides and some concerns. While the base model is $2,499, I think that both the audio kit ($99) and the charging dock ($299) are absolutely essential. Without the audio kit, the remote user has no ability to assert themselves aurally, and in noisy environments is effectively mute. Without a charging dock that the robot can enter and leave under its own controls, the remote user is at the mercy of whoever in the office remembers to charge the robot at night, and if they call in while the Double is still plugged into the wall, they’ll need to get someone’s attention or (shudder) call them on the phone to say “Hey, come unplug me!”
There are some practical hurdles and things to be aware of:
There are some social and usability concerns:
Some features that would be nice:
I think that these are not dealbreaking concerns. Some of them are not the fault of the engineers or designers of the device, but social realities. Hopefully people will treat the robot as kindly as they would a human. In terms of features, Double is actively adding new ones, such as screen sharing with multiple users.
The Double is obviously a first-generation device, but like the Apple II or the original iPod, it is going to set the standard for future devices of its type, and it is still worth buying this first version. I hope one day to drive a Double II, but for now, it’d be great to have a Double I around the office.