If you are not familiar with the Raspberry Pi mini-computer, it is effectively a single-board mini PC: the circuit board is less than 10cm long and 7cm wide, and, even with my Raspberry Pi 4B mounted in its stainless steel case with heat-dissipation fins, the whole thing is still less than 2cm thick. Its original development was loosely related to the OLPC (one laptop per child) approach, since the creators were focused on promoting the teaching of basic computer science in schools and in developing countries.
For more general information, you can check out the Wikipedia page on the project, or the raspberrypi.org website.
Either way, while a lot of the operating systems available for the Raspberry Pi are Linux-based, including their own Raspbian flavour, and while it can indeed operate as something close to a full desktop PC, another approach is to customise its software and put its hardware to work in forms of robotics. That is partly why I looked into it in the first place: I have been involved in a couple of different forms of software development over the years, and, yes, I am at heart a nerd, so the idea of robotics appeals to me as well.
On that note, I am also primarily an Android smart device user, partly due to customisability and the variety of software available on that platform, and especially because of the assistive technology software targeting accessibility for the visually impaired, like myself. I made the choice to primarily target Android quite a few years back, in large part due to the Android version of the vOICe from the SeeingWithSound.com project by the Dutch scientist Peter Meijer.
However, it is not really possible to just connect an external camera to an Android phone and have it used as the primary camera input channel, and this is where this slightly alternative approach comes into play.
While I had already tried something similar in the past using the Raspberry Pi 3B model, it honestly seemed unable to handle the complete workload. With the Android operating system up and running, and spoken output happening all the time, it wasn't even terribly responsive when just interacting with the operating system interface, and when I tried working with an external camera at the same time, it became even more obvious that the hardware was not up to a high enough standard.
This is why I decided to try it out with an upgraded, higher-performance model: the Raspberry Pi 4B 8GB.
This model also includes built-in Ethernet, Wi-Fi and Bluetooth support, two USB 2.0 and two USB 3.0 ports, dual micro-HDMI display sockets, and a socket that lets you use a standard set of earphones via their 3.5mm audio jack. By default it can receive power from a USB-C cable connected to either a wall socket adapter or a mobile power pack. There is a lot more information available on the specification page, but these points are the ones relevant to what I was trying to achieve and the processes I carried out.
In other words, what I have here is the Raspberry Pi 4B 8GB model, which should definitely offer enough performance for my purpose, plus a mobile power pack rated at something like 10000mAh, feeding the unit via a USB-C cable in this context. This unit does seem to require a display to be connected in order to power up completely. There are inexpensive HDMI dummy-display emulators out there, which I am pretty sure are the size and shape of a normal USB flash drive or dongle, and I will probably look into obtaining one. My current workaround is to plug an HDMI-to-micro-HDMI adapter cable into one of the micro-HDMI display sockets on the unit, with the other end connected to my VGA-to-HDMI adapter; without an actual monitor or display plugged in, it will recognise that as a form of pseudo-display and proceed with the boot-up process.
In terms of an external camera, the Raspberry Pi should accept a USB connection from any UVC-compliant webcam (UVC = USB video device class). What it comes down to is that these cameras are more or less driverless: they do not require specific software to be installed for most devices or operating systems to pick them up and make use of them. In other words, almost any modern webcam should do the job, which lets people make choices about pricing, sizing, form factor, lens capability or angle, etc. quite easily. What I have here for primary testing is a pair of video camera sunglasses that I purchased online a couple of years ago. They can either record video on a stand-alone basis, or be connected to a device like the Raspberry Pi via their own USB cable. In practice they are literally just sporty-looking sunglasses with a video camera embedded in the bridge; connecting them to the Raspberry Pi just requires a standard-looking USB lead plugged into one of the arms on the side of the glasses.
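If you want to check whether a given webcam really is driverless in this sense before dedicating it to the Pi, one rough approach on any Linux machine is to plug it in and see whether the kernel registers it as a video device without any extra software. This is only a sketch of that idea; device names and paths will vary from system to system:

```shell
# Sketch: list any video capture devices the kernel has recognised.
# On most Linux systems a UVC webcam shows up under /sys/class/video4linux
# with no extra drivers installed - which is exactly the property that
# makes it usable with the raspberry pi.
found=0
for dev in /sys/class/video4linux/video*; do
    [ -e "$dev" ] || continue   # glob matched nothing: no devices present
    found=1
    echo "$(basename "$dev"): $(cat "$dev/name")"
done
[ "$found" -eq 1 ] || echo "no video devices detected"
```

If the camera appears in that listing immediately after being plugged in, it is a reasonable sign it will also be picked up on the Pi.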
In terms of interaction with the unit, the most common method is to connect a normal USB keyboard and mouse, but, while I haven't been able to test this yet, most of the mini wireless keyboards, whether operating via their own USB dongles or via Bluetooth, should work fine, and some of them have built-in touchpads to provide the equivalent of mouse interaction.
Firstly, after collecting the hardware mentioned above, and having initially tried this out with the 17.1 (Android 10) version, with some success but a couple of minor ongoing hardware compatibility issues, I downloaded the .img operating system image file from the following page:
LineageOS 18.1 (Android 11)
I then used the software named balenaEtcher to load it onto the 8GB microSD card that I planned to make use of here. A side note: since these Raspberry Pis run off the microSD card as their primary storage device, you can literally swap between operating systems in more or less 30 seconds.
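balenaEtcher is a point-and-click tool, but for anyone who prefers the command line on Linux, the classic alternative is dd. The sketch below uses placeholder names and, so that it is self-contained, "flashes" to an ordinary file rather than a real SD card; on a real system the target would be the card's block device (something like /dev/sdX), which you must triple-check first, because dd will happily overwrite the wrong disk:

```shell
# Sketch: write a .img file to a target with dd, then verify the copy.
# IMG and DEV are placeholders - DEV here is a stand-in file so the
# sketch can run anywhere; in real use it would be the SD card's
# block device, e.g. /dev/sdX.
IMG=lineage.img
DEV=sdcard.bin

# create a small dummy image so the sketch is self-contained
dd if=/dev/urandom of="$IMG" bs=1024 count=64 status=none

# the actual flash step
dd if="$IMG" of="$DEV" bs=4M conv=fsync status=none

# verify that the target now matches the image
cmp -s "$IMG" "$DEV" && echo "flash verified"
```

The conv=fsync operand makes dd flush writes before exiting, which matters when the target really is removable media.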
One trick that needed to be handled before I could even get it to boot up the first time was to edit /boot/resolution.txt and set it to 1024x768. It would then boot up with a micro-HDMI adapter cable connected to an HDMI-to-VGA adapter, connected to a VGA monitor, and I then needed a sighted connection to handle some of the first steps in the process on my behalf.
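The edit itself is trivial once the SD card's boot partition is mounted on another Linux machine. A minimal sketch, assuming the partition is mounted at a placeholder path and that resolution.txt holds a single WIDTHxHEIGHT line (which is my assumption based on the image I used; check yours before overwriting it):

```shell
# Sketch: set the boot resolution on the SD card's boot partition.
# BOOT is a placeholder for wherever the partition is mounted; a
# scratch directory is used here so the sketch is self-contained.
BOOT=./boot-partition
mkdir -p "$BOOT"

# resolution.txt is assumed to hold a single WIDTHxHEIGHT line
echo "1024x768" > "$BOOT/resolution.txt"

cat "$BOOT/resolution.txt"
```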
Once booted up, the sighted connection went through the initial configuration wizard, and then used Settings -> System -> Advanced settings -> Audio device to set the 3.5mm audio jack as the default audio output. You should later be able to switch to Bluetooth audio output if you want to, but I would not make that configuration change right off the bat.
One trick when working with the mouse is to take the cursor all the way to the bottom of the display, then click and drag upwards to open the list of installed apps (Files, Settings, etc.); this is something both my sighted counterpart and I make use of at times. On most Android devices, the home screen launcher offers similar functionality via a single- or double-finger drag up from the docking bar.
A similar gesture you will also want to use at times is to drag the mouse cursor right up to the top of the screen, then click and drag down to open the notification shade.
I had copied onto a flash drive the .apk files of various Android packages that I had extracted, or exported, from some of my other Android devices. We installed a free version of the eSpeak text-to-speech engine and the alternative, third-party Jieshuo/Commentary Android screenreader, and I then had my sighted connection double-check that eSpeak could work, and that Jieshuo had all permissions granted and its accessibility service enabled.
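For anyone comfortable with the Android developer tools, another route for this kind of side-loading is adb over the network, rather than juggling flash drives and a file manager on the unit itself. This is only a dry-run sketch: the IP address and .apk file names are placeholders, it just prints the commands rather than executing them, and it assumes you have enabled ADB/network debugging on the unit first, which I have not verified on this particular image:

```shell
# Sketch (dry run): the adb commands for side-loading .apk files over
# the network. PI_IP and the .apk names are placeholders; remove the
# echo indirection to actually run them against a unit with network
# debugging enabled.
PI_IP=192.168.0.50
cmds=0
for cmd in \
    "adb connect $PI_IP:5555" \
    "adb install -r espeak.apk" \
    "adb install -r jieshuo.apk"
do
    echo "$cmd"
    cmds=$((cmds+1))
done
```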
This meant that, for the first time, I could connect my earphones to the unit and actually start interacting with it somewhat independently, but there were some more steps I wanted to work through that would also require sighted assistance.
Besides working with a mouse and an external keyboard, some keyboard keystrokes to take note of are the following; these are workarounds that stand in for the standard Android physical buttons, or the virtual buttons on some units:
- F1 = Home
- F2 = Back
- F3 = Multi-tasking (recent apps is the term some may use for this one)
- F4 = Menu (I think that this is similar to tap-and-hold, or right-click)
- F5 = Power
- F11 = Volume down
- F12 = Volume up
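Under the hood, Android expresses this kind of remapping in a keylayout (.kl) file that pairs Linux input scancodes with Android key codes. Purely as an illustration of the format (I have not checked where this particular image actually defines its mappings, so treat this as a hypothetical excerpt), the table above would look something like:

```text
# Hypothetical Android keylayout (.kl) excerpt - illustrative only.
# Left column: Linux input scancode; right column: Android key code.
key 59   HOME            # F1
key 60   BACK            # F2
key 61   APP_SWITCH      # F3
key 62   MENU            # F4
key 63   POWER           # F5
key 87   VOLUME_DOWN     # F11
key 88   VOLUME_UP       # F12
```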
For example, if I want to initiate a software-based power-off, I need to hold the F5 key until the dialogue comes up with the power off, restart, etc. options, and then, if working with the mouse, use it to locate the power off button. Note that while you need to double-click a file or home screen icon, just clicking on some buttons activates them. When working with at least Google's own screenreader software, TalkBack, it does seem to shift focus to this dialogue, so you can use its element navigation keystroke combinations to locate something like the power-off button, and then use its activate-item keystroke to trigger it.
Along those lines, especially since some software requires it, and since not everybody wants to side-load software onto an Android unit, I specifically wanted to install the Google Play Services, the Google Play Store, and then some of the official Google products, like the Google text-to-speech engine and the Google Accessibility Suite, which includes the TalkBack screenreader. I also wanted to try out other pieces of software I would then install via the Play Store; some of them are pieces of assistive technology software I have either purchased once-off or hold paid up-front subscriptions for, so they would also require Google Play Services to work on this unit.
There is a package, or collection of patches, referred to as the Open GApps project, and, on the page I downloaded the operating system image from, the developer mentions specific versions to obtain and a process for installing them after loading the initial version of the operating system. Unfortunately, this does not seem to be a process a blind individual can carry out on their own.
Anyway, I obtained a copy of the above collection, in its .zip package, and then had my sighted connection work via the Settings -> About -> Build number menu item, where, on most Android devices, you can double-click or double-tap the item something like 7 times to enable developer options. Once we'd done that, under the Developer options sub-category that then appears, there's an option to boot into recovery mode; to give you an idea, I think this is somewhat similar to safe mode under the Windows operating system.
Once in recovery mode, while it was not reading out any of the interface to me, my connection could locate a button to initiate installation, locate the .zip file on a USB flash drive we'd connected to the unit, and start the installation, before then rebooting back into normal operation mode.
Anyway, this meant the unit now included some of the Google processes and services, including the Play Store, so I could get it to prompt me for my primary Google account, enter my details, and then install various pieces of software like the Google text-to-speech voices and Google's Android Accessibility Suite. While a lot of people prefer to work with Jieshuo/Commentary, I am myself just more comfortable with TalkBack, and it is now my screenreader of choice; it activates immediately on boot-up, and, as per usual, you initially hear it using the Google TTS voices before it switches over to my speech engine of choice in this context, a free version of the eSpeak engine. eSpeak is slightly robotic-sounding, but it can handle over 40 different languages and requires a lot less processing power than some of the more natural-sounding voices you can install, or work with, on the Android platform.
Since I am, for now, primarily using keyboard interaction with the unit, some TalkBack-specific keystrokes to bear in mind: Alt+Space activates the TalkBack menu; Alt+Left and Right arrows perform the same action as swiping with one finger on a touch screen; Alt+Enter performs the equivalent of a double-tap to activate an element; and Alt+Shift+Enter performs the equivalent of a double-tap-and-hold. You can also reconfigure these keystrokes under TalkBack's advanced settings or, as mentioned above, work with one of the mini keyboards with a built-in touchpad, and you can also use the mouse cursor, since both TalkBack and Jieshuo/Commentary read out what it hovers over.
In any case, I could then also start installing some additional pieces of software I wanted to try out, including some of the following; I have left these installed on the image that I can provide for download, since they do now seem to at least try to operate as they should on the unit:
There were some additional assistive technology, or object recognition, apps I tried out for very short periods, but they didn't necessarily seem to want to work on this platform, or provided somewhat strange results when asked to interpret input from the external camera. If anyone has any ideas or suggestions, I would be only too happy to try additional pieces of software, if I haven't already tried them. Something else to bear in mind is that some of these pieces of software require, or rely on, online queries, so you would need the unit connected to something like a mobile hotspot offered by one of your other devices; this was how I was operating during the installation processes in any case, using its built-in Wi-Fi capability.
On that note, some of the time the vOICe itself seems to operate almost exactly as I would like it to, with no noticeable performance lag and no real lack of camera view processing, but it does seem to possibly take a scan or two to catch up with changes to the camera input.
However, one thing I specifically tested was whether, after making sure the unit was online the first time I launched it (to allow it to pull in additional resources in the background), it does in fact offer a form of real-time text recognition. That is just one of the selling points I mention to people to explain why, besides the augmented-reality-for-the-blind approach to sensory substitution that it offers, which does require time spent training your brain to recognise its audio patterns, they might find it of immediate interest.
Below I will try to provide an audio demonstration where you can listen to some examples of making use of this now combined device, and I will try to explain the steps I am taking as they happen. Nothing special, but more or less an example of why I was trying this out in the first place, and how most of the interaction takes place with it at the moment.
On that note, let me explain that the local cost of the same Raspberry Pi 4B 8GB model I have here is currently less than ZAR1500 on average, and you should be able to get hold of both a mobile power pack and a UVC-compliant, and therefore compatible, webcam for probably less than ZAR250 each. Along with a couple of other additions, the total cost of this combination should not come to more than something in the range of ZAR3000-ZAR4000 (roughly US$200-US$275). Considering what it can offer as a fully mobile object and text recognition gadget for a blind/VI person, one that doesn't have to be completely obvious or blatant, and that wouldn't require you to be holding something in your hand when out and about, it should be clear why I am trying to sort this whole combination out, especially when compared to some of the other commercial products out there and their pricing.
And, yes, the audio recording quality below is not perfect, since it was just something I did quickly, without much preparation or anything like an actual recording studio, but I think it should still give you enough of an idea.
If you want to contact me directly to ask for more details, please work via either one of the mailing lists I am present on, or via the contact form on this site, and I will be quite happy to provide a download link for the pre-configured, or tweaked, .img file if you'd like to try it out on your own device. I will definitely request feedback, suggestions, etc., since my focus now is to figure out how to work around what seems to be a form of inconsistent system resource overload, with the inconsistency thereof being the most important aspect at the moment.
On that note, if you are also locally based here in South Africa, and you'd like more information on the hardware, or if you can provide me with information relating to sources for said-same hardware, etc., please also feel free to get in contact with me.