C++ segmentation fault with a two-dimensional array

Hello,

I'm experiencing a weird seg fault at run time when declaring a two-dimensional array and instantiating a class.
Please see the code below; the comments describe when the error occurs and when it does not.

Compiled with

g++ segf.cpp -o segf

on Fedora 16 x86_64. sizeof(int) is 4.

#ifndef __WEIRD_SEG_FAULT__
#define __WEIRD_SEG_FAULT__

class A {
public:
  A(const int _idx);
  
private:
  int m_idx;
};
#endif

A::A(const int _idx) {
  m_idx = _idx;
}

int main() {
  /*
   * If class A is not instantiated at all,
   * no seg fault occurs,
   * whatever the array dimensions.
   */
  A * a = new A(1);
  /*
   * The dimensions below do not result in a seg fault.
   */ 
  //wchar_t t[65536][16];
  
  /*
   * Dimensions defined as below do not result in a seg fault.
   * (Note: ^ is bitwise XOR, not exponentiation, so 2 ^ nb is 19 here and
   * the array is only 19 x 17.)
   */

  /*const int nb = 17;
  wchar_t t[2 ^ nb][nb];*/
  
  /*
   * Dimensions defined as below result in a seg fault.
   */ 
  wchar_t t[131072][17];
  
  /*
   * Dimensions defined as below result in a seg fault.
   */ 
  /*const int r = 131072;
  const int c = 17;
  wchar_t t[r][c];*/

  return 0;

}

Any input about what's happening is welcome.

TIA.

Edit: I don't get any errors at all with a previous g++ version.

You can't just declare arrays with dynamic sizes. It's not a feature you can trust to work. I've had to go through and fix code which did that before.

Also, a 131072 x 17 array would be rather large, perhaps too large for your stack segment.
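
To put numbers on it (assuming wchar_t is 4 bytes, as it normally is on Linux), a couple of lines of arithmetic show what you're asking for:

#include <cstdio>

int main() {
  // Sizes of the two arrays from your post, with 4-byte wchar_t.
  const unsigned long a = 65536ul  * 16 * sizeof(wchar_t);  // the case that works
  const unsigned long b = 131072ul * 17 * sizeof(wchar_t);  // the case that crashes
  std::printf("65536 x 16 : %lu bytes (%.1f MiB)\n", a, a / (1024.0 * 1024.0));
  std::printf("131072 x 17: %lu bytes (%.1f MiB)\n", b, b / (1024.0 * 1024.0));
  return 0;
}

That 8.5 MiB is just over the 8 MiB stack limit most Linux distributions hand out by default (ulimit -s usually reports 8192), while the 4 MiB case squeaks under it, which would explain why one declaration crashes and the other doesn't.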

#include <cmath>
...
...
  const int nb = 30;
  wchar_t t[(int) pow(2, nb)][nb];

If class A is not instantiated, no seg fault occurs up to nb = 30 (anything higher would overflow int), on both Fedora 16 and Debian Squeeze.

If class A is instantiated, a seg fault occurs for nb > 16, except at nb = 30, on Fedora 16 (with both g++ (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2) and g++ (GCC) 4.6.2 20111027 (Red Hat 4.6.2-1)). On the Debian Squeeze host (g++ (Debian 4.4.5-8) 4.4.5), the seg fault occurs for nb > 17.

Whether we need such huge arrays is not the point; such behaviour is just erratic.

I'll just wait for the next g++ bug-fix release; it seems to be g++ version related.

Thanks.

You're writing code that you've already been told won't work, and why it won't work, and you're still complaining that it crashes... That makes it a programmer error. C is unusually freeform for a language -- it will happily chop off your foot if you tell it to -- and the answer is not to tell it to. If you ask for a 3-gigabyte object on the stack, it will give you a 3-gigabyte object on the stack, even though that may be thousands of times larger than the maximum your system allows, and it will crash whenever you actually try to use anything beyond a certain point.

Use the stack as intended, and it won't crash. Use it not as intended, and you are into the realm of 'undefined behavior'. Pushing the limits is going to tell you more about your system than the language itself... It's not erratic. The results of such strange code are quite predictable in a way -- you know they won't be stable in all compilers and systems. Many compilers won't even compile it, since array dimensions are supposed to be constants, not lvalues, and certainly not function return values cast from floating point!
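
If you want to know how much stack your particular system actually gives you, it's easy to ask. This is Linux/POSIX-specific and just a sketch, not something the language defines:

#include <cstdio>
#include <sys/resource.h>

int main() {
  // Query the soft stack limit the shell handed this process.
  struct rlimit rl;
  if (getrlimit(RLIMIT_STACK, &rl) != 0) {
    std::perror("getrlimit");
    return 1;
  }
  if (rl.rlim_cur == RLIM_INFINITY)
    std::printf("stack limit: unlimited\n");
  else
    std::printf("stack limit: %lu bytes\n", (unsigned long) rl.rlim_cur);
  return 0;
}

On most desktop Linux installs that prints 8388608, i.e. 8 MiB.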

1) You are allocating enormous things on the stack. This is a no-no. Don't depend on having more than sixteen megs of stack space -- and that's a system setting, not a compiler one, so the same executable may crash on some systems and not others! You're trying to allocate entire gigabytes, which obviously won't work everywhere and isn't even possible on many systems -- you're approaching the limits of 32-bit segment sizes. Obviously this is going to be a problem. If you want to allocate large amounts of memory, this is what malloc is for: it returns a null pointer instead of just crashing your program, so you can tell when you've asked for an impossible amount (there's a short sketch after these points).

2) You are dynamically generating the array size. This is also a big no-no. Most compilers won't let you do that at all -- array sizes in C++ are supposed to be compile-time constants. That gcc lets you get away with it is a C99-style variable-length-array extension, not standard C++, and not something to rely on.

3) I don't think you know enough about the language to be critical of it yet. You're still using floating point functions instead of the built-in bit shift operators to handle bits.
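
To make 1) and 3) concrete, here's a minimal sketch along those lines (your dimensions, nothing canonical about them): the big array goes on the heap via malloc, and the failure case is checked instead of crashing.

#include <cstdio>
#include <cstdlib>

int main() {
  const std::size_t rows = 131072;
  const std::size_t cols = 17;

  // malloc returns a null pointer instead of killing the program when the
  // request cannot be satisfied, so the failure is detectable.
  // (The bound inside the pointer type still has to be a compile-time constant.)
  wchar_t (*t)[17] = (wchar_t (*)[17]) std::malloc(rows * cols * sizeof(wchar_t));
  if (!t) {
    std::fprintf(stderr, "could not allocate %lu bytes\n",
                 (unsigned long) (rows * cols * sizeof(wchar_t)));
    return 1;
  }

  t[131071][16] = L'x';   // used exactly like the automatic array would have been
  std::free(t);
  return 0;
}

(In C++ a std::vector<wchar_t> of rows * cols elements would do the same job with less ceremony; the point is simply that big data belongs on the heap, not the stack.)

And the power-of-two sizes never needed pow() or a cast at all -- a shift does it in pure integer arithmetic:

#include <cstdio>

int main() {
  const int nb = 17;
  const int rows = 1 << nb;    // 2 to the power nb; keep nb below 31 so it fits in an int
  std::printf("%d\n", rows);   // prints 131072
  return 0;
}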

Yes, it took me some time to understand that g++ has been lenient; none of that code should have run with such huge arrays.

Not my intent to be critical of C++, not at all!

I would have preferred that such rubbish code not compile at all; a compiler is supposed to be an intolerant tool.

Anyway, thanks for your input.

Well, that's the thing -- it's not always rubbish. The compiler makes few assumptions about your system. It's supposed to be used in many more situations than just generic user-mode code -- you can make bootloaders out of it for instance, and even entire operating systems. It just does what you tell it to.

So the compiler doesn't second-guess you. It'll let you make a 3-gigabyte stack array on the assumption you wouldn't have asked for it if it weren't possible...

Using variables as array sizes is nonsense, though. With some compilers it works, with some it doesn't, and with some it crashes. I get the impression gcc is treating those declarations as C99-style variable-length arrays, sized at run time rather than at compile time, which would explain why the results vary so much between compilers and versions.