Don't be afraid of libraries. In any 'language'. You see, you can either build a menu or window from scratch every time you need it, and calculate the length of a string on the fly, or you can make life easier (and, seemingly paradoxically, your code smaller) by utilising libraries.

One thing that I do recommend you do is take the time to work through your library code and debug it. Very few libraries are entirely 'bug free', and the last thing you need is to spend a long time trying to resolve a quirk in your code that actually originates in a library.
To give an example, the wonderful image conversion software ChangeFSI will error when loading a certain type of TIFF file. This is not the fault of a library; if I remember correctly (I have not patched my copy), it is the result of a simple typo in a piece of code that is only used under certain conditions. It will seem - to a programmer - to be a fairly trivial thing, but exactly the same can happen with library code. So save yourself trouble, and make sure the library is bang on spec before you use it.



Backup regularly. It is possible, if you do something foolish like blanking a file full of code, to recover the lost data from the now-unused part of your harddisc. But I would not rely upon it. I know this from experience - scanning many megabytes of data looking for the right bits is not a job I'd wish on my mortal enemy. I had to recover a lot of stuff from a failed 512Mb partition. Trust me, you do not want to do that.

So. Backup your data. I know you won't back up daily. Heck, I don't do it - and I've had enough disasters to convince me I should. But what I have done is to write a program to extract the sourcecode and resources from my projects and copy them to an iomega zip disc. This, done weekly, is - as far as I can see - a good backup. If you use the 'Newer' copy option, it flies along.
(the program is called '!AutoBack', it is not polished, but if you would like a copy...)

Remember, you don't need to keep executables. If you do not have sufficient data to generate a full executable, then you aren't copying enough. I copy across only the current development notes and sources and resources. From that, the executable can be built.
It is a good idea to make a copy of the assembler/compiler/linker and any tools that you use. Ignore what the software licence conditions say, you ARE (under EU law (assuming you live in the EU, pity our US readers!)) entitled to make reasonable software backups. I consider a bare-bones copy in your source backup as being 'reasonable'. Anybody who doubts me, consider getting hold of, say, objasm 2.00 or Norcroft C version 4. Good luck!

Try a CD-ROM burner. They'll hold 650Mb. While they are a one-shot medium, and not really updatable, they're a damn sight cheaper than most other methods of storing data, they are reliable, and the storage capacity has a lot going for it. You can then just dump your ENTIRE coding partition/drive onto a CD-ROM.
But don't be cheap. Keep a weekly backup cycle going. Use the previous discs for reference for earlier copies of the software. Failing that, Maplin sell clock mechanisms cheaply. Turn your old CDs into clocks and give them to your parents!
If you are really paranoid about source issues, stick your old CD in a microwave oven, for about two and a half seconds (650W). Of course, I won't be liable if anything blows up, but on the up side, nobody will read that CD ever again...

Always keep a backup with you. My site backup is held on a small harddisc slung out the back. The zip disc lives in my backpack. If I used CD-ROMs, I'd bury one a month in the garden. It may sound like I've lost my mind - but think if your equipment catches fire. Nobody wants to think about situations like that, but it can happen, as can a total disc failure. And it helps to have thought about it, and considered your options.
While a disc failure can, to a large extent, be cured by fitting a new drive (or, in extreme cases, buying a new computer) and installing from backup... I can't think of a feeling worse than not only losing all of your equipment and possessions, but also the realisation that all of your data is gone. For me, that would mean a serious chunk of the last decade of my life. Thus, I ensure that a copy of the important stuff goes where I go. Now, I could be involved in a serious RTA and also a fire, which would take care of both copies, but I kinda figure in that case, my data would probably be immaterial!


Source release

When you officially abandon a project, release the source into the public domain. Maybe it'll be ignored, maybe you'll help a newbie learn, maybe those people that use your software will be able to maintain it themselves.

I have an agreement with a friend that upon my untimely demise, he will interrogate my computer and my backups and make all of my non-commercial programming fully public.
You see, the way I see it is that companies and individuals stop coding - for whatever reason, from death, to a change in life (like getting married and no longer having the time), to simply abandoning the platform - and they take with them a lot of source.
Consider Computer Concepts. They left the RISC OS market and took with them several incarnations of a highly rated desktop publisher, and an artwork design program. All of their code was in assembler, so it'd probably be hellish to work through. However, had they released the code as open source, then I'm sure somebody would have kept the software alive; more so than the current addition of plug-ins.
It just seems, to me, to be a terrible shame to squander all of that time and development. After all, if you are leaving, what do you have to lose by making your old code Open Source? Let your legacy live on.

I wish to point your attention to http://www.drobe.co.uk/codevault/. It isn't chock full of legacy code available for programmers to update and maintain, but it is a damn good start!
I wish to congratulate everybody who has had the foresight and vision to make their older source code publicly available, and I hope that soon more companies will make their old no-longer-maintained software available for enthusiast programmers to develop.



Somewhere I read that 10% of time is spent coding, and 90% is spent fixing the code.
When you write in BASIC or C, you are speaking to the machine in a pseudo-English language. It has strict rules, but if...then...else is fairly easy to follow.
With assembler, you are speaking to the machine more in its own language. The mnemonics are provided to assist you (the computer doesn't actually understand 'STMFD', it still undergoes a translation), however now we are at a state where every instruction you type translates to one processor instruction. Where concepts such as strings and arrays cease to exist. Few of you will be programming the bare metal (ie, generating text by poking data directly into memory and communicating directly with hardware - this is mainly an art that is part of operating system and/or device driver design), but you are still operating at a much lower level. You have the full gamut of access to the system, you have the ability to fiddle with hardware in a way that high level languages only dream of. The entire computer is your oyster.

This brings with it a severe penalty. That of responsibility. Now, Linux users will advocate that a user program should never be able to access supervisor mode. Likewise, you should not be able to stiff the machine with one instruction. That's all very well and proper on their Linux. That is not RISC OS. My personal feeling is that RISC OS is more of an enthusiast's operating system. Sadly, source code is not available (one big plus for Linux), but RISC OS almost goes out of its way to provide full and uninhibited access to the machine.

Thus, you must be responsible.


When you code in a high level language, it may not work first time.
When you code in assembler, I'd be surprised if it worked first time. I knocked up a little bit of code to scan and load Fresco cache files (part of QuickVoy) and I was very surprised that it worked flawlessly the very first time.
Don't be discouraged - it is very simple to make a mistake in something as deep as assembler. It is all part of the process. It is a learning process. Every time you track down that bug, you have discovered a little bit about yourself, and about the inner workings of the machine.
If your bug is a silly mess up on your part, you'll need to try harder next time!
But if your bug is one of those subtle I-spent-three-days-on-this problems that stems from deep within the system itself, then don't just fix it and carry on coding. That's bad. Fix the deep bug, then take a moment. Think about the mental odyssey you just took. And for God's sake be sure to pat yourself on the back. Those big obscure ones are the worst. And when you are in the zone chasing it, you are no longer thinking like a code monkey, you have become a hacker and you are meeting the hardware and firmware on its own terms. Very little may change as far as an observer can see, but inside your mind, incredible things happen. So always be sure to take a moment. It is moments like that that make it worthwhile.

Chances are, if you are reading this, it is because you do this by choice - not because your employer expects you to. So you have some inclination of what I'm on about. If you do not, then don't be afraid to try!

Maybe I'm totally mad, but I find the very worst thing is a blank !Zap window awaiting me to type in some code. It actually depresses me. I much prefer to add to existing code, to optimise code (either in the HLL, or by adding chunks of assembler to speed things up), or by debugging.


Failures can be categorised into one of three types...

It totally fails
This is probably the simplest to fix. This is because something glaringly obvious is going wrong.

Something is happening, but not what is supposed to happen...
This is harder to fix, and may be due to:

It nearly works...
This is a hard one to fix, as the errors are going to be a lot more subtle. Some ideas...

Copyright © 2004 Richard Murray