I have the code below, which sorts an array of floats with qsort. The two candidate arrays contain the same digits and differ only in the position of the decimal point, yet qsort apparently behaves differently depending on how many decimal places the individual floats have:
// Compilation: gcc qsort.c -g -lm -Wall -o qsort
// Usage: ./qsort
#include <stdio.h>
#include <stdlib.h>
float changes[] = { 8.37, -9.197, -3.9662, 4.946, -3.6095, -2.534, 1.693, 2.1133, 2.3198, 8.21 }; /* affected by compare */
//float changes[] = { 0.000837, -0.009197, -0.039662, 0.004946, -0.036095, -0.002534, 0.001693, 0.021133, 0.023198, 0.000821 }; /* unaffected by compare */
int n = sizeof(changes)/sizeof(changes[0]);
int compare (const void * a, const void * b) { return ( *(float*)a - *(float*)b ); }
int main () {
    printf("Unsorted: ");
    for (int i = 0; i < n; i++) { printf("%.6f ", changes[i]); }
    printf("\n");
    qsort(changes, sizeof(changes)/sizeof(changes[0]), sizeof(float), compare);
    printf("Sorted: ");
    for (int i = 0; i < n; i++) { printf("%.6f ", changes[i]); }
    printf("\n");
    return 0;
}
The output for the first array looks correct:
$ ./qsort
Unsorted: 8.370000 -9.197000 -3.966200 4.946000 -3.609500 -2.534000 1.693000 2.113300 2.319800 8.210000
Sorted: -9.197000 -3.966200 -3.609500 -2.534000 1.693000 2.113300 2.319800 4.946000 8.370000 8.210000
The output for the second array comes back completely unsorted:
$ ./qsort
Unsorted: 0.000837 -0.009197 -0.039662 0.004946 -0.036095 -0.002534 0.001693 0.021133 0.023198 0.000821
Sorted: 0.000837 -0.009197 -0.039662 0.004946 -0.036095 -0.002534 0.001693 0.021133 0.023198 0.000821
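While narrowing this down I noticed that compare returns an int, so the float difference presumably gets converted before qsort ever sees it. Here is a small stand-alone check of that idea, using values from the two arrays (my own sketch, separate from the program above):

#include <stdio.h>

int main(void) {
    /* First array: most pairwise differences are larger than 1,
       so the truncated int result is usually non-zero. */
    printf("%d\n", (int)(8.37f - (-9.197f)));        /* prints 17 */
    /* Second array: every pairwise difference is smaller than 1,
       so the truncated int result is always 0 and qsort would
       treat all elements as equal. */
    printf("%d\n", (int)(0.000837f - (-0.009197f))); /* prints 0 */
    return 0;
}

If that is what is happening, it would also explain why 8.370000 still appears before 8.210000 in the first "Sorted" line: their difference is only 0.16, which truncates to 0 as well.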
What is this behaviour called, and how do I correct the code so that it also works on the second sample array?
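If the implicit float-to-int conversion of the comparator's return value really is the culprit (that is my assumption, not something I have confirmed), would a comparator along these lines be the right way to fix it?

int compare (const void * a, const void * b) {
    float fa = *(const float *)a;
    float fb = *(const float *)b;
    /* Return -1, 0 or +1 explicitly instead of relying on the
       truncated difference of the two floats. */
    return (fa > fb) - (fa < fb);
}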