Also, you can have the best of both worlds: save intermediate results in a pipeline while it's helpful for debugging, then run the updates in place once you know everything works. This is particularly helpful when your data is complex and your datasets are largish.
I am currently merging various sources that were not originally meant to work together, creating a common reference system for fairly complex data. I find it helpful to write a bunch of simple updates that can be executed sequentially, running them from a .bxs file. If I want to see the output after any of these updates, I can do so with an OUTPUT statement. Most of these stages are useful only for debugging, so once the pipeline is working well, I get rid of the OUTPUT statements.
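To illustrate the shape of that workflow (not the actual .bxs syntax, which I haven't shown here), a minimal Python sketch with hypothetical stage names: each stage updates the data in place, and a debug flag plays the role of the OUTPUT statements between stages.

```python
DEBUG = True  # when False, no intermediate output (like removing OUTPUTs)

def stage_strip(rows):
    """Update 1: normalize whitespace in place."""
    for r in rows:
        r["name"] = r["name"].strip()

def stage_key(rows):
    """Update 2: derive a common reference key in place."""
    for r in rows:
        r["key"] = r["name"].lower()

def run_pipeline(rows, stages):
    for stage in stages:
        stage(rows)             # update in place
        if DEBUG:               # inspect the result after any stage
            print(stage.__name__, rows)
    return rows

rows = [{"name": "  Alice "}, {"name": "BOB"}]
run_pipeline(rows, [stage_strip, stage_key])
```

Once the pipeline works, flipping DEBUG off (or deleting the print, as with the OUTPUT statements) leaves only the in-place updates.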
If your data is large, it can also be helpful to have two different statements for importing the data: one that imports everything, and another that imports only a small test subset that runs quickly. Comment one of them out and use the other. For rapid development, use the small, quick subset, then run the pipeline on the whole dataset once it works.
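The same comment-one-out trick, sketched in Python with hypothetical import functions standing in for the two import statements:

```python
def import_full():
    # stands in for the statement that loads the whole dataset
    return list(range(1_000_000))

def import_test_subset():
    # stands in for the statement that loads a quick test slice
    return list(range(100))

# Keep both lines in the script and comment one out:
data = import_test_subset()   # fast, for development
# data = import_full()        # slow, for the real run
```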
Jonathan