Home

Welcome to my home page. I became blind at birth. I started programming computers at a young age. I also earned my general class amateur radio license, KA3TTT, a hobby to which I have returned with great joy. I practice Qigong and consider myself a Taoist. I use Linux as my desktop and Android as my mobile OS. I eat gluten-free vegan meals. For the rest you'll have to read my blog.

A Cranky Review of the Latest HIMS Note Takers

November 12, 2012

I feel cranky. I didn’t get a chance to meditate. My cleaning lady can’t speak English. My neighbor’s dog howls incessantly. I ate some old Indian food which made me feel gross and bloated. And to top it off, I just saw the latest note takers from HIMS.

For those who don’t know, specialized devices exist for the blind to help them take notes. Many have since migrated to laptops or iDevices, but some still love their note takers. The best note taker, the Braille ’n Speak, came out in the nineties. Since then nothing has surpassed it. The devices today feel hacked together and heartless, with little regard for design or aesthetics. The HIMS line of products continues this unfortunate trend.

A friend of mine brought over two note takers for me to try. I started with the HIMS Braille Sense U2. I don’t know what the letter U and the numeral 2 stand for, but they reminded me of the incredibly profane and incredibly funny Negativland song. The case actually impressed me, probably the most well-made part of the whole package. The note taker itself felt solid in the case.

The layout confused me a little. It has eight dots, which I had to get used to, since the Braille ’n Speak only had six. The keys felt kind of wobbly to me, not the solid feel of the Braille ’n Speak. Each cell has a routing button above it. The keyboard has two sets of two scrolling buttons, one set at each side, a little above the display. This lets one read the braille display without interruption. It has some media playing keys on the front. It also has six function keys which I didn’t even notice until my friend pointed them out to me.

The Braille Sense, as its name implies, also has a braille display. I’ve never used braille displays since they cost so much money and since I never read braille that quickly. However, my friend told me that a lot of blind programmers like them for reading code. I looked forward to trying that, but first I would explore the note taking functions. I did notice that the dots on the display felt rather sharp. I think I’d prefer them a little more rounded.

I switched it on with the big sliding switch on the front. It played a sound and I found myself at the main menu. I enjoyed using the familiar space-dot-1 and space-dot-4 commands to scroll through it. The scrolling buttons also did this, and the routing buttons directly take you to a choice. I liked that.

Things went downhill from here. The file manager acts sort of like the one found in Windows. You have to use Space-tab (dots 4-5) to move forward a tab and Space-B (dots 1-2) to move back. This and other little conflicts with the Braille ’n Speak’s conventions confused me. On the BNS, E-chord would act like the enter key, and Z-chord would act as the escape or abort key. On the Braille Sense, E-chord acts like the escape key, and Z-chord ends the program. Dot-8 acts as the enter key. And yet, Dot-7, which often acts like a backspace, does not act like the escape key. Little inconsistencies like this began to add up.

On the Braille ’n Speak, you could type letters in combination with the space bar, called a chord. For example, E-chord means to braille E while also holding down the spacebar. In addition, the Braille Sense uses dots seven and eight as modifiers. This means three times the number of potential commands. It also means three times the confusion.

It might not matter so much, except for the minor fact that the HIMS offers NO form of keyboard identification. Most screen readers offer a feature where you can type a combination of keys and it will tell you what they do. For example, on a Mac you can use CTRL-Option-K to enter keyboard identification mode. If you then hit CTRL-Option-L it will say: “Read current line, reads the current line in the VoiceOver cursor.” The Braille Sense does not offer this crucial feature, a major oversight. They do offer a menu of commands, but navigating through a menu takes time and does not offer the same level of interactivity. Plus, some of the menus had multiple commands with the same key.

With mounting dread I tried the word processor. Again I enjoyed typing as if on a BNS, but it still felt different. They did do one nice thing by adding the lower-G-chord to read the current paragraph. We wanted that and they never did it on the BNS, so at least HIMS got that right. Too bad the rest of the editor doesn’t follow suit. And too bad I couldn’t easily find out what a key does.

The abundance and inconsistency of commands really showed themselves. I could never remember which modifier to use. In the good old days we just had one, now I had to deal with three. I had the biggest laugh when I realized they use the X-C-V characters for cutting/copying/pasting. This makes sense on a QWERTY keyboard, but makes absolutely no sense in braille. This simple example said it all.

I noticed something interesting about reading lines. Dot-1 and dot-4 move by lines as they should. However, the line breaks correspond to the bounds of the braille display. The scrolling buttons behave this way and that makes sense, since the buttons’ proximity to the braille display suggests a relation. However, when using the keyboard and speech, the line boundaries should happen at line breaks.

When using a one-dimensional output such as speech, line breaks may not necessarily apply. When reading text you more often want to read by sentence or paragraph. Reading by line really only makes sense when reading code. On the BNS you could type very long lines, so reading by line would read by a more meaningful unit, since you would just place the line breaks where you wanted them. Putting the line breaks at the edge of the braille display introduces more gaps in concentration.

I felt dazed and confused. I typed a paragraph of nonsense and saved the file. It does offer a number of formats, though I don’t know how well it does at exporting to them. The save dialog box reminded me of the save dialog you’d find on a PC or Mac, where you tab between the filename, format, and the save button. I got used to that and went back to the main menu.

Their email client behaved pretty much as I’d expect. You tab between mailboxes and messages. I didn’t actually try it with an account, but it did have some messages in the sent box, so I did get to try that. It seemed pretty straightforward. I just felt too scared to link it up to one of my accounts. Maybe next time I’ll create a stupid test account for testing this crap.

I tried their calendar. I found it very confusing. I enjoyed browsing around the calendar with braille commands, but something just didn’t seem to make sense. At some point my friend told me to hit F6 or something, one of those little function keys I didn’t even know existed, and it brought up a menubar. I found out how to add an event. Except they don’t call them events, they call them schedules. So you add a schedule which actually adds an event on the schedule. Then somehow I got stuck in the dialog. I could not get out of it. I added one event, and the dialog just did not go away. I added another garbage event. I tabbed and shift-tabbed around. I tried to use the stupid E-chord which, remember, now acts like escape. Nothing would get me out of the dialog. I let my friend, who has more familiarity with the unit, play with it, and they could not get out either. I had to use Z-chord to kill the program. I started to feel cranky.

I had a similar experience with the address book. I tried tabbing around, added a record, and got stuck in another dialog. This had happened twice in a row now. I could not get rid of the goddamn thing, and had to use Z-chord to kill the program. I sense sloppy coding.

I once again found myself back at the main menu, feeling even crankier. I wondered what the database manager did. It lets you keep records of things. You add different fields then add records using those fields. I felt too scared to try.

The Braille Sense has an FM radio. From a technological point of view this seemed like a cool idea. It even worked! This actually impressed me. Too bad radio died in 1990.

I finally found the web browser way down towards the end of the main menu. I ominously wondered if they put it near the end because it sucks. Of course, the feeling turned out to be true. Note taker manufacturers: if you want your device to have any hope of competing against iOS, then you must offer an equivalent web browsing experience. Nothing but the best will do.

I went to a site I knew well, my own. I did get a kick out of seeing it in braille, but the novelty soon gave way to incredulity. The braille display will prepend “ln” to a link, for example “ln my own.(1/15).” I wondered what the numbers meant, and soon realized that they referred to the link number and the total number of links. But wait, I thought I had way more than 15 links on my page. Sure enough, I soon started seeing (16/15) and so on. What a joke!

As you might now expect, it has lots of stupid stupid stupid commands for navigating. Backspace-B takes you back a heading, Backspace-F takes you forward a heading. I get the back/forward thing, but it provides an inconsistent set of commands. And it didn’t even announce the piece of text as a heading.

Other commands had only slightly more logical keystrokes, but the whole thing felt clunky, and it would take me quite a while to memorize everything. Amazingly, hitting the backspace key would not take you back a page as it does in every modern browser. No no, to do that you hit backspace-p. Terrible! And remember, no keyboard identification, just pages and pages of cryptic commands.

At some point while flailing around, I accidentally hit one of the little buttons on the front. It started recording me. I hit the stop button and then the play button and it played it back while locked in this weirdo menu. To its credit, it does have a good microphone and stereo speakers.

I had had enough of the note taking, so I moved on to using it as a braille display. I tried pairing it to my iPad, and it worked well. Everything actually behaved as expected in iOS. I decided to explore using the braille display on the Mac, especially for coding. This also worked well. I could read the computer’s text, and even type in grade II. The terminal even worked, and I did enjoy reading code in braille. I just don’t want to spend $5995.00 to do it. Yes, you read that price right.

I had begun to get hungry. When I get hungry I get angry. My recent experience amplified my anger. We decided to order food and try the other note taker.

The Braille Edge 40 looks cute. It has a thin profile, a nice braille display, scrolling buttons, and two sets of four-way arrows. I really wanted to like this thing. Sadly, nothing about this worked as expected.

I don’t know how many ways I can write this. Nothing I did worked. I tried hooking it up to my Mac. In Terminal it actually brailled out “greater-than” instead of a greater-than sign (>). It even brailled “space”. Turning on eight-dot braille helped a little, but not completely. And while typing, hitting the backspace key did not take you back a character; instead it inserted a Y-umlaut (ÿ). Pathetic.

iOS fared even worse, if such a thing could happen. Scrolling the text did not work at all. The arrows did not work at all. Nothing worked. The page says it works with VoiceOver, but don’t believe it for a moment. I consider this blatant false advertising. And the price? Oh, only $2995.00. Have I gone insane? I didn’t even try the note taking functions. After my experience with their full-fledged note taker I felt paralyzed with fear at the thought.

This concludes my cranky review of these two note takers. I hope it will deter a blind person from wasting their money. By the way, my friend has promised to bring over more note takers in the future, so get ready for more “fun”. Sorry for the negative article. I hope you still enjoyed it. I promise to have an amazing positive article next!

The Beginner’s Guide to Echolocation

November 08, 2012

I have some very exciting news. Ever since I started learning about echolocation I wanted a way to get started myself. I made contact with Justin at World Access for the Blind and he helped me on Skype before we did my amazing life-changing intensive. Still, we all agree that we need a way to easily teach the blind about echolocation, or at least give them enough information to get them started safely. We also need to prove to the skeptics that it really exists. Now someone named Tim Johnson has written the perfect book to get you started.

The large print version of the Beginner’s Guide to Echolocation: Learning to See with your Ears sells on Amazon, though, as any blind person knows by now, Amazon does not care about accessibility. Fortunately, he has also made an accessible version available, so long as you can read MS Word documents. The accessible version costs twenty-three dollars. He also has an audio version available for $37.00. It contains the complete text, plus demonstrations of the different types of tongue clicks, an essential addition. This has become the more popular version, and with good reason.

Many colorful quotes by such luminaries as W. B. Yeats and Albert Einstein decorate the book. People who use echolocation pick these lofty heroes for a reason. This skill represents something truly amazing, something which will completely shift your sensory paradigm and move you into a better place psychologically.

He emphasizes the importance of meditation. Simply allowing yourself to listen to the sounds which surround you can help train your brain. I love meditating, and have begun writing a book about it myself. It seems that echolocation activists also share an interest in opening the third eye through meditation. This does not happen by accident. By the way, eating superfoods also helps.

I also liked how the book uses music as a reference. You can practice listening to music as a way to boost the range of your hearing. You use the same skill to sort out signals when doing echolocation. Music also uses a lot of reverberation, and these echoes have some similarities. Understanding how sound and music work will aid you in your understanding of how echolocation works.

The book presents many of the same exercises Justin had me do over Skype, as well as some of the things we did on our first night. As the book points out, everyone perceives echolocation differently, and will have to arrive at their own understanding and ways of explaining it. I liked how he had exercises to do individually, but also ones which require a partner. Having someone else holding the objects introduces an unknown element, something vital for your progress. Every blind person faces unique challenges. You need to push yourself just enough to make small mistakes so you can correct them and grow more confident.

I found it interesting that he suggested opening a car window and listening to the echoes to get a sense of echolocation. It does not give quite the detail or accuracy of a tongue click, but it most certainly works. A woman who taught me as a child reminded me that we would ride in her car with the windows down, and I could tell her about the passing telephone poles. Of course at such a young age I did not think of it as echolocation, but it makes perfect sense. Even a simple exercise such as this will prove its validity.

I felt most interested in the discussion of using the visual cortex of the brain to build non-visual imagery. This sounds like what I experience. When I say I see something with echolocation, I really mean it. I actually see the dark form of an object. For me it also has a strong synesthetic component. In other words, if I click against a glass surface, I will get a cool feeling that reminds me of glass. You have to learn to open yourself to these unique sensations to truly succeed.

The book ends with some recommendations of what to do next. Again, they strongly recommend the three-day intensive of which I’ve raved extensively. Along with World Access, they also list an organization in the UK called Visibility. If you have done everything up to this point, you will have a good background for approaching these organizations for further training. Now the excitement really begins!

If you’d like to see the potential of echolocation, then buy this book and try the exercises. Think about it, wouldn’t you pay twenty-three dollars to begin to learn how to see? This book will show you just that. For the full experience you’ll need to do an intensive, but this will let you know if you should think about the more serious commitment. In my opinion you really can’t lose. If you can hear then you can see! Go for it!

Links as Language

October 27, 2012

Recently I attended a talk put on by the Philadelphia Area New Media Association, entitled “Links as Language: How Hyperlinks are Changing the Way we Read and Write.” I found it very interesting, and it started me thinking about the unique way a blind person perceives the internet.

The event took place at the Wharton School of Business, the oldest and according to many the finest business school in the country. I emailed ahead of time and the organizer of the event said they could accommodate me. I got a cab there and listened to the cabby bitch about his Nutribullet juicer, which gave him a headache. I used echolocation to find my way inside and to the desk, where they got someone to escort me to the room.

I arrived early for a change, and greeted a few people. Pizza and coke arrived, and I accepted some, though beer or water would have suited me better. I set up my MacBook Air and prepared to enjoy.

David Dylan Thomas began by showing a sentence. I asked him to read it aloud since I can’t see it. He did, and continued this practice throughout. This got me thinking about accessibility right off the bat.

A lot of the presentation revolved around the concept of artful linking. Links act like metaphors, and you can use them as an effective writing tool. Linking to something in a clever way delivers a reward. It also makes more sense from an accessibility perspective.

He said that a hyperlink has words underlined in blue. Honestly, up to this point I never knew this. I don’t see the web, I hear it with a screen reader. To me, a link just has the word “Link” or “Visited Link” prepended to the name. For example: I don’t see the web, I hear it with a, link, screen reader.

I have noticed this construct become embedded into my internal dialog. My subconscious uses it as a way to indicate a link to another thought. External technology imitates internal technology. The internet acts like an external form of telepathy. It serves as a perfect metaphor for the collective consciousness.

These thoughts blended perfectly with the talk. Soon he asked a great question: “Who can tell me the two most useless words in a hypertext link?” Of course I knew the answer. “Click here!” A bunch of people seemed to agree.

Once again it brought home the notion that accessibility really affects everyone. To me, “click here” makes no sense. Until recently a blind person could not click anything. Now someone can on an iPhone/iPad, or if using a Magic Trackpad on a Mac, but for the most part blind people do all their navigation using the keyboard. Thus it means nothing.

He then asked how many people had someone teach them how to use a hyperlink. A few tentative people said yes. Then he asked how many people just instinctively knew how to use a hyperlink. Of course most did. Then he said my favorite sentence of the presentation: “Click here is postmodern. It’s like a stop sign that says ‘This is a Stop Sign.’” People already know how to use a hyperlink. You don’t need to insult their intelligence.

This got me thinking back to my first web browsing experiences. I think it happened on an online service called Delphi. Back in the good old days of 1994 we just had text terminals, none of this fancy graphical nonsense. We had to scroll through a page at a time. The text contained bracketed numbers [23]like this. At the page prompt you could type in the number to follow that link. And we loved it.

We also loved playing games. These online services had the first multiplayer online games. I particularly remembered one on GENIE called Federation II. I spent lots of money “studying for school” when that game came out. But when you think about it, we played the first multiplayer online games, and it just seemed so cool.

I’ve also always enjoyed interactive fiction. These text adventure games print a description of a room, and accept text input from the keyboard. They began in the seventies, peaked in the eighties, went underground, and now have begun to resurface partly thanks to portable devices such as the iPhone and iPad. They combine a story with source code in amazing ways. My interest resulted in an interview in the excellent documentary Get Lamp. I recommend it if you’d like to know more about interactive fiction.

With that thought, we can now explore the idea of the web as a text-based gamespace. If you picture a page as a two-dimensional space, you can consider hyperlinks as the third dimension, or Z axis. The links connect the levels. Just as text adventures foreshadowed video games, the web foreshadows a virtual reality. The links act like connections in the brain. The web behaves a lot more like an artificial intelligence than many of our contrived attempts. We’ve already done amazing things with augmented reality, overlaying the web on the real world. One day we may do the inverse, modeling the real world and its objects on the web.

He took a quick aside which I felt good about, so I will detour at this point as well. More often than not, restaurants just link to a PDF copy of their menu. I have called PDF the Pain-in-the-Ass Document Format since it came out in the late nineties. The worst experience happens when the PDF just contains an image scan of the menu, as opposed to the actual text. This makes it impossible for a blind person to read. David made the point that restaurants should stop thinking of it as just a simple posterboard, and more of an opportunity to give a whole interactive experience. I agree!

Technology has changed so much. When I started going online, bulletin board systems acted like village pubs. Online services came along and felt like little cities with shopping malls. The internet connects things in an even greater way. To me, putting something on the web sometimes feels like installing an art exhibit in a public toilet. David chose a more elegant metaphor, like a star in the Milky Way. Both work.

The tools to author hypertext have also evolved. For a long time, inserting a hyperlink meant putting in raw HTML code, <a href="http://behindthecurtain.us">like this</a>. I didn’t particularly mind, though it made the text far less readable. Emacs came out with a way to do it which worked well but still felt clunky. Now on my Mac I just select text with Shift-Command-left/right arrow or with VO-Enter. I then hit Command-K and insert a link. This works in many standard applications such as TextEdit and Mail, and it also works in MacJournal. This increased ease means increased use of hyperlinks. The ease of reading also increases artful link text.

We do have some problems we need to overcome. Right now, we have so many file formats. This already creates problems, and this will increase as time goes on and more data becomes irretrievable. We also need to solve the problem of persistence. If a page changes its address then links which pointed to it will become invalid. Use of URL shortening services has made this worse.

David closed with a good point. In 1999 links just took you from one place to another. Now things have become less linear. Instead of thinking of the web as just a place to put our stuff, we should think of it as a place to connect our stuff. This really wrapped the whole thing up well for me.

I have only addressed the major points and how they relate to my own interests. I recommend going to this talk yourself if you have the chance. No doubt you will come away with something valuable. I feel glad I went.

Unfortunately, I don’t think I can go to next month’s PANMA talk, which discusses Flash. You can easily guess my opinion of that. Fortunately, an event called BarCamp Philly will happen this weekend, and by all accounts I have to go. I hope if you go that you will introduce yourself to me. Their ticket system has already given me problems, which BarCamp’s staff has done their best to resolve, so already I see the beginnings of a good article. See you at the pre-party, hopefully.

Echolocating Sculpture: A Monument to Abstraction

October 27, 2012

About six months ago I learned a skill called echolocation. By making a tongue click, a blind person can learn to see with reflected sound. Read that article first if you haven’t, as this one depends on it. Only one organization in the world teaches this skill: World Access for the Blind. They deserve your support. Every blind person who can should learn it.

During the intensive, my teacher Justin said that I could use echolocation to see sculpture. This intrigued me. Of course, I immediately wanted to go to the Rodin Museum and try it out. Justin said I should do it myself later so we could work on more practical things. I agreed, but really wanted to go. Today I had my chance.

My father runs the Seraphin Gallery. Once in a while he will ask us (his kids) to go to an art opening. Usually they have paintings, which obviously I can’t get too excited about. At least I’d get free wine. This time however he said they would have sculpture, so that piqued my interest. I told him of my plan to use echolocation to try to see sculptures.

Most art museums will not let you touch the sculptures, sometimes even getting quite mean about it. I recall a field trip to the Philadelphia Museum of Art. They really didn’t want me touching their precious sculpture, and made me wear gloves. This totally took away the appeal. Marble feels muddy under cloth. Too bad I didn’t know echolocation then.

Fortunately, tonight’s opening did not happen at the Philadelphia Museum of Art, and I could touch everything, since my father owns the gallery. I even got to have a chat with the sculptor, David Borgerding. I felt excited that I could touch the pieces, but I felt just as excited about trying echolocation to see something abstract.

I walked there with a friend of the family named Alex, who should have a blog of his own. We entered the gallery and I just started echolocating to find sculptures. I felt like a kid searching a room for treasures. And sure enough I found them!

They arose like dark forms, monuments of abstraction. I could scan and make out the major features. After I used echolocation, I would then allow myself to touch them and get the fine details, then I’d go back to echolocation to appreciate it at a distance and with a holistic perspective. You know that tale about the blind men touching an elephant? That would never happen with echolocation, which lets you see the whole structure instead of its discrete parts.

I saw a lot of waves, and appendage-like forms. Even the squares did not have perfectly square shapes. No right angles, just curves. The artist’s statement confirmed this. I gravitated to two in particular. The first reminded me of a sailboat. The second one reminded me of the monument to abstraction I referenced above. David actually took this picture himself, so there you have a picture of a sculpture taken by the sculptor. This one also had an amazing texture, since he made it out of bronze and polished it somehow. I think gold also played a part.

Hearing about these colors reminded me of another visual aid, the Color ID app I have previously used to watch a sunset. It accurately identified the colors of the metals as I passed the iPhone over the sculpture. The app has exotic colors which I enjoyed in this artsy setting, especially Almond Frost, whatever that means. The simple colors proved more practical in a basic sense, grays and browns mainly. Now I had three ways to appreciate this art: touch, echolocation, and the Color ID app on my iPhone. This gave a very complete picture.

While discussing all this with the artist and others, I realized something else. Normally I use echolocation in everyday settings, such as finding a path, following the shoreline of a building, or enjoying the organic patterns of a tree. Now for the first time my brain saw something completely abstract. It tried to put names to the forms but ultimately could not. The artist agreed, saying: “It’s nothing, and it’s everything.” This made for a novel experience. The visual centers of my brain felt satisfied and saturated.

By this time the crowds had begun filing in, making echolocation less effective, especially for appreciating aesthetics. The wine went to my head and I felt like eating. Alex and I walked to a nearby restaurant. By the time we returned, the showing had ended. I look forward to appreciating sculpture again, especially now that I can see it. I like sculpture!

RubyMotion Rocks!

October 04, 2012

Ever since I started using an iPhone, I have wanted to learn how to write apps for it. I made several attempts to learn Objective C, but it never worked out. Then one day I learned about RubyMotion and it changed my life forever, just like the iPhone itself. I have just finished the tutorial and have a basic understanding of how to write an app. RubyMotion rocks!

After Steve Jobs left Apple, he formed a far-out computer company called NeXT. They developed what they hoped would become the next amazing computer, especially for educational institutions. They wrote a custom operating system called NeXTSTEP and adopted a programming language called Objective C for it. It combined the standard features of C with a unique object-oriented syntax, including keyword arguments.

When Apple bought NeXT and hired Steve back, they decided to use NeXTSTEP and Objective C. This became the core of what we now know as Mac OS. It then found its way into iOS. To this day, many objects start with NS, such as NSString and NSURL. The NS, of course, stands for NeXTSTEP. Seeing NS always reminds me of the whole story, and how one never knows how their accomplishments and actions will influence the future. NeXT failed, but their work succeeded.

Since I wanted to write apps, I had to learn Objective C, or so I thought. As I have written previously, I’ve never had good experiences with C. It reminds me of a bad relationship – you try to make it work, but it just doesn’t.

I began to assume I could never write apps, but remained hopeful.

One day, I read an article on Cult of Mac about a hot new iOS programming course called Tinkerlearn. It attracted my attention because it has the lessons within comments in the source code. Programming languages include a way to leave comments which the language ignores. This lets us mortal humans document our code. Embedding lessons in comments seemed very creative, not to mention highly accessible. It cost $14.99 so I figured why not? I bought it and fired up XCode, Apple’s development environment, the program you use to write apps.
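Just to illustrate the idea, here is a tiny made-up sketch of a lesson riding along inside comments. I wrote it in Ruby rather than Objective C, and the lesson text is mine, not Parker’s:

    # LESSON 1: The computer ignores everything after the # sign, so this
    # explanation travels with the code without changing what it does.
    # The line below stores a greeting in a variable, and the next one
    # prints it to the screen. Try changing the string and running it again.
    greeting = "Hello from a lesson hiding in the comments"
    puts greeting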

In the past I have joked about a correlation between programming in C and drinking beer. This applies to all dialects of C. In the case of Objective C, forget the language, I felt like I needed to drink a beer just to use XCode. I don’t know how it looks to sighted people, but to me it seemed like a very complicated program to do a very complicated task, with lots of very complicated controls and strange areas of the screen to do all sorts of esoteric things, when I just wanted to write and compile a program. And this from an Emacs user! Nevertheless I trudged on, and started to get into the course.

I emailed Parker, the author, and we struck up a dialog. I told him of my hope to write apps, and the challenges of a blind developer. Specifically I wanted to know about designing interfaces programmatically. Sighted people use something in XCode called Storyboards to visually lay out the screens of the app, then they add hooks to these elements. VoiceOver could read none of it, so I actually have to create the interfaces with raw code. Some sighted people also prefer this, and Parker actually released a modified version of one of his lessons specifically to teach this. I felt overjoyed, but confused.

After I wrote an article about how much I loved Ruby, Parker wrote me back on Twitter, agreeing with me. I wished aloud that I could write apps in Ruby, fully knowing of Apple’s restrictions. Parker wrote back and asked if I had ever heard of RubyMotion. It lets you write iOS apps in Ruby! Really? And it uses standard terminal utilities! Really? And you can use your favorite text editor. Really? I emailed Laurent Sansonetti, the author of the program, and he said if it didn’t work he’d refund the $200 price. I figured I’d spend that on headache medicine if I continued learning Objective C, so I took the plunge.

Have you ever visualized something, but just figured it would forever exist in your imagination, only to one day find out that it really does exist? You get a very weird feeling actually seeing it on the physical plane. I felt exactly like that when I first got RubyMotion working. It felt like the spirit made flesh, like a dream made real, and like the way I could finally write apps. Welcome to the future!

I just finished the tutorial by Clay Allsopp. The entire RubyMotion community feels exhilarating, and has given the utmost help in my unique situation. I can’t say I can write an app, but I actually understand the basics. Most importantly, I understand the way different subviews combine in a main view to make what you see on your iPhone when you run the app. I still have a lot to learn, but for the first time the pieces have started sliding into place!
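To give a flavor of what I mean, here is a minimal sketch of my own, not code from the tutorial. The GreetingController name and label text are just made-up examples. A RubyMotion app sets up a window, hands it a root view controller, and then adds subviews to that controller’s main view:

    # app/app_delegate.rb: a hypothetical, minimal RubyMotion app
    class AppDelegate
      def application(application, didFinishLaunchingWithOptions: launchOptions)
        @window = UIWindow.alloc.initWithFrame(UIScreen.mainScreen.bounds)
        @window.rootViewController = GreetingController.alloc.init
        @window.makeKeyAndVisible
        true
      end
    end

    # A controller whose main view holds a single UILabel subview.
    class GreetingController < UIViewController
      def viewDidLoad
        super
        view.backgroundColor = UIColor.whiteColor
        label = UILabel.alloc.initWithFrame(CGRectMake(20, 100, 280, 40))
        label.text = "Hello from RubyMotion"
        view.addSubview(label) # the subview combines into the main view
      end
    end

Running rake in the project directory builds the app and launches it in the iOS simulator, all from the terminal, without ever opening XCode.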

One day you will see my apps in the app store. I have lots of ideas to change the world and make people laugh! RubyMotion makes it all possible. I’ll take Ruby and Emacs over Objective C and XCode any day! Dreams really do come true.
