Sorry, I'm not that familiar with MySQL, but since you haven't had an answer yet, here goes!
This is based on Oracle, but the same ideas should apply.
Firstly, it's safe to update a table with any number of rows. The worst that can happen is that the update fails and the uncommitted changes are rolled back.
You could:
split the update into smaller batches
add commit statements every few hundred / thousand statements (see the batched-commit sketch at the end of this answer)
instruct your SQL client to commit every few hundred / thousand statements
add more redo / undo logs, or make your existing ones larger, or both (a rough sketch follows after this list)
switch off logging on the table (ALTER TABLE ... NOLOGGING in Oracle; I don't know whether MySQL has an equivalent) to reduce redo / undo in most cases
disable referential / unique constraints and their indexes (more for performance; see the session-settings sketch after this list)
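For the redo / undo points, a rough Oracle-side sketch. The logfile path, size, and table name are made up, and note that NOLOGGING only skips redo for direct-path operations, so a conventional UPDATE still generates undo / redo:

```sql
-- Oracle sketch, hypothetical names/sizes: add a larger redo log group
-- so log switches happen less often during the bulk update.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/redo04a.log') SIZE 512M;

-- Reduce logging on the table; this only affects direct-path operations
-- (e.g. INSERT /*+ APPEND */), not conventional UPDATEs.
ALTER TABLE big_table NOLOGGING;
-- ... bulk work ...
ALTER TABLE big_table LOGGING;   -- switch it back on afterwards
```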
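On the constraints point, MySQL/InnoDB doesn't let you disable an index as such, but you can relax the checks for your session while the bulk update runs. A minimal sketch, only safe if you already know the data is consistent:

```sql
-- MySQL/InnoDB session settings: skip FK and unique checking while the
-- bulk update runs. Re-enabling does NOT re-validate existing rows.
SET unique_checks = 0;
SET foreign_key_checks = 0;

-- ... run the bulk UPDATE here ...

SET unique_checks = 1;
SET foreign_key_checks = 1;
```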
I think I'm going to go down the road of creating a script to run in the background (as a cron task) that commits every X thousand records. I just didn't want to run the update all in one go, as that would impact performance and generate a lot of redo / undo.
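For what it's worth, here's a minimal sketch of that batch-and-commit loop as a MySQL stored procedure. The table big_table, its status column, and the 10,000-row batch size are made-up placeholders; the WHERE clause has to exclude already-updated rows, or the loop never finishes:

```sql
-- Hypothetical names: update 10,000 rows at a time, committing after each
-- batch so locks are released and the undo footprint stays small.
DELIMITER //
CREATE PROCEDURE batch_update()
BEGIN
  DECLARE done INT DEFAULT 0;
  REPEAT
    UPDATE big_table
       SET status = 'processed'
     WHERE status = 'pending'      -- excludes rows already done
     LIMIT 10000;                  -- MySQL allows LIMIT on single-table UPDATEs
    SET done = (ROW_COUNT() = 0);  -- capture before COMMIT resets it
    COMMIT;
  UNTIL done END REPEAT;
END //
DELIMITER ;

-- Called from cron via the mysql client, e.g.:
-- mysql mydb -e "CALL batch_update();"
```

Running it off-peak via cron means each batch holds its row locks only briefly, rather than one giant transaction holding them for the whole run.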