by chrish » Tue May 16, 2017 6:05 pm
Another quick note, this time a good one. For the last couple of months I've been working on a game here at UNM in our main library. In it we used both Bluetooth beacons and AR targets. We had the launch for the game last week, and everything went well. In particular, our AR targets included photos of real-world perspectives our players would need to scan (as well as reproduced 2D images). The AR worked quite well even with differences in lighting and focal length across devices and play sessions.
The only thing that is not entirely intuitive is that ARIS does not record a player as having "viewed" the trigger unless they tap the AR image on screen while it is being recognized (which then sends the player to the linked ARIS object). I understand that locks correspond to these underlying objects, so maybe that's how they fit in, and I'm not saying it should necessarily work differently, just that we had to train players to tap on the image so they could get further in the game. Other authors might want to know about that too.
We had some inconsistency with the BT beacons across devices and playtests (variable delays before a beacon would trigger), but I don't know how much of this is ARIS itself. Toggling BT off and on at the device would usually jog a stuck device back into detecting beacons.
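For anyone curious where those delays might come from, here is a minimal iOS sketch (plain CoreLocation, written against the pre-iOS-13 API; the UUID and class name are placeholders, and this is not ARIS's actual beacon code) that logs how long ranging takes to first see a beacon:

[code]
import CoreLocation

// Minimal diagnostic: log the time from the start of iBeacon ranging
// until the beacon region is first seen. Placeholder UUID, not ARIS-specific.
class BeaconDelayLogger: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,
        identifier: "delay-test")
    private var startTime: Date?

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()  // real code should wait for the auth callback
        startTime = Date()
        manager.startRangingBeacons(in: region)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        guard let start = startTime, !beacons.isEmpty else { return }
        print("First saw a beacon \(Date().timeIntervalSince(start))s after ranging began")
        startTime = nil  // only log the first hit
        manager.stopRangingBeacons(in: region)
    }
}
[/code]

Running something like this alongside playtests would at least tell you whether the lag is in the OS-level scan or in the app layer above it.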
Overall though, these new location detection schemes are pretty great. I have high hopes not just for their technical reliability, but also that they can be the basis for fun and useful dynamics in location-based play that we have not tried before.