Forcing a write to a file without newline?

Hello, I am writing a program that runs with root privileges; it forks a child with lowered privileges, redirects the child's stdout and stderr to a file, and then runs bash.

The problem is that I want to see all of the current output whenever I read this file, even while the program is still running. But when the child writes anything to stdout or stderr, nothing appears in the file until a newline character is written; only then is the whole line actually output.

Here's the code (I tried flags like O_RSYNC and O_DSYNC in the hope of preventing the line buffering):

		pid_t pid = fork();
		if (pid == 0)
		{
			//if (setgid(uid) == -1)
			//{
			//	cerr << "could not set group; exiting for security reasons." << endl;
			//	return 1;
			//}
			//if (setuid(uid) == -1)
			//{
			//	cerr << "could not drop privileges; exiting for security reasons." << endl;
			//	return 1;
			//}
			int fd = open(outputFile.c_str(), O_RDWR | O_CREAT | O_RSYNC | O_DSYNC | O_TRUNC, 0644);
			if (fd == -1)
			{
				cerr << "could not open output file" << endl;
				return 1;
			}
			dup2(fd, 1);
			dup2(fd, 2);
			close(fd);
			cout << "terminal-server starting";
			execl("/bin/bash", "bash", (char *)NULL);
			// execl only returns on failure
			cerr << "execl error" << endl;
			return 1;
		}
		else
		{
			int code;
			waitpid(pid, &code, 0);
			cerr << "The shell has terminated with code " << code << "." << endl;
			break;
		}

The line 'cout << "terminal-server starting"' does not cause anything to be written to the file at all, but if I add "<< endl;" at the end (to output a newline), the line is written. Bash then writes the command prompt and waits for input, but the prompt never appears in the file.

I hope I explained the problem clearly enough. Is there a way to stop it from buffering the output? Thanks.

That is because the output stream is buffered, so you need to follow the open with a call to either setbuf or setvbuf and specify there that the I/O stream is unbuffered.

... or flush STDOUT after adding the text to the buffer and before the execl().

The "cout" and "stdout" streams may not be the same.

If you want real control over IO streams, C++ "cout" style is NOT the way to do it.

setbuf( stdout, NULL );
printf( "String to stdout\n" );
fprintf( stderr, "String to stderr\n" );

And even then, if you're doing multithreaded/multiprocess writes to the same file, printf() and fprintf() are NOT atomic calls - the data from multiple calls to printf()/fprintf() can be interleaved. If you want atomic writes where data from a single call is guaranteed to be contiguous at the end of the file, such as for a log file, you need to open the file with standard C open() in O_APPEND mode, format your data yourself, and use write().

And if you're doing your own log files like that, now you have to handle the log file getting too big. It can't be deleted because you have an open file descriptor on it - if someone does try to delete it all they'll do is remove it from the directory and the file will stay on disk until your process closes its descriptor.

There's a reason why library calls such as "syslog()" already exist.

You're not going to come up with a better logging system. Redirecting stdout/stderr is a logging scheme just like a Lego house is a place to live. Sometimes that's OK. For important processes, it's not.

I thought the C++ cout iostream and the C library stdout standard I/O stream were required to be synchronized by default. I know that an application writer could use:

    std::ios::sync_with_stdio(false);

to disable that synchronization and make the stdout stream independent of the cout iostream, but I didn't see anything in the code samples shown in this thread indicating that cout and stdout are not synchronized in this application.

You are, of course, correct in noting that iostreams and stdio streams are not appropriate in a multi-threaded application where I/O between threads to a single stream is not coordinated. But again, this thread was just talking about flushing a (buffered) prompt written by a process before it overlays itself with a new program by invoking execl(). If you have multiple active threads running in a process when one of those threads calls execl(), that process has undefined behavior all over the place.

Simply flushing stdout will not do anything, because when I execute bash, I also need its output automatically flushed. Would setbuf(stdout, NULL) cause bash not to buffer the output?

C file streams are not "global", they're limited to your own program. That's why your text "disappears" when you fork before flushing, because it's sitting in your buffer, which ceases to exist the instant you execl.

So the problem is a library your program uses, not stdout itself. stdio is fundamentally an "assemble lines for me in memory instead of calling write 200 individual times" library. You could bypass it and use file descriptor 1 directly with write(), but printf and its relatives are handy, no?

bash does a good enough job of managing its own streams, I don't think you need to worry about it as long as you properly wait() for it to make sure it's done printing and quitting and all that.

You are wasting your time trying to control how a shell does its output from the outside.
You can set and flush all you want, but when the shell starts it can change any of those settings,
or even implement its own buffering. You have no idea.

If you are doing anything that needs to be really robust a shell is not the answer.

To the contrary, a shell is very very robust with streams -- handling streams is kind of its job.

I think you can reasonably expect a shell to finish writing its buffers before it quits.

I am not expecting the shell to quit at all. I want it to not buffer its stdout output, just like it doesn't buffer it when writing to the console, because I am indeed trying to implement a web-based console. I want ALL of bash's output to be written to that file so that it can be read and displayed on a page. But the command prompt does not appear in the file until the command is sent to stdin, which makes no sense.

Okay.

Have you actually tried it? What happened?

I suspect the prompt is written to stderr, not stdout.

pardon?