Two pass assembly
Revision as of 19:09, 1 November 2006
Two pass assembly in this context means building an OS in two stages. Internally, gcc already works this way (compile + assemble + link). However, when building a large number of binaries that may use shared libraries, it is difficult to make them play nicely together. In some cases special hacks have to be added to the makefiles to ensure that shared library requirements are satisfied before linking.
I propose to compile and then link across the source packages, rather than one at a time.
Stage one
gcc is used to compile the source files into .o object files, which are all placed in a common directory.
Stage two
ld links them all up in one pass. Because all the .o and .a files have already been built and put into a common directory, linking is more reliable: everything is in the same place.
Size optimisation
Obviously we want our root image to be as small as possible. In most cases it will be running from a ramdisk, so the size of the image dictates the amount of RAM required. In turn, this dictates the minimum hardware spec of a machine that can use our system.
Object files
We don't want any .a, .sa or .o files in our final image, as they are not generally needed at runtime, so only the bins, libs and .so files are copied into the image. This process needs to consider the following points:
- packages may produce one bin or lib, or many
- we don't always want to use all of the bins or libs that are produced
- in some cases make rules exist to specify partial builds
- we need rules that specify which resulting files end up in the image
- some files may be required, others merely included
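One way to express these rules is a per-image manifest that the copy step reads. The sketch below is an assumption about how such a manifest might look (the `required`/`optional` keywords, the `stage` and `image` directory names, and the file names are all invented for illustration); the point is that link-time leftovers like .a and .o files are simply never listed, so they never reach the image.

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
stage="$work/stage"; image="$work/image"
mkdir -p "$stage/bin" "$stage/lib" "$image/bin" "$image/lib"

# Pretend build output: runtime files plus link-time leftovers.
touch "$stage/bin/init" "$stage/lib/libc.so" \
      "$stage/lib/libc.a" "$stage/bin/init.o"

# Hypothetical manifest: which built files end up in the image,
# and whether each is required or merely included if present.
cat > "$work/manifest" <<'EOF'
required bin/init
required lib/libc.so
optional bin/getty
EOF

# Copy only manifest-listed files; fail hard on a missing required one.
while read -r kind path; do
    if [ -e "$stage/$path" ]; then
        cp "$stage/$path" "$image/$path"
    elif [ "$kind" = required ]; then
        echo "missing required file: $path" >&2; exit 1
    fi
done < "$work/manifest"
```

Here `bin/getty` is marked optional, so its absence is tolerated, while a missing `bin/init` would abort the image build.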
Configuration files
Nothing happens without some basic configuration files. In many cases the standard config files shipped with a software package are not appropriate for our purposes and are not used; instead, our own custom or generated config files take their place.
Typical examples are inittab, fstab, rc.local or init.d files.
The logic that delivers these files must be aware of the success or failure of the previous steps: these files depend on a bin or a lib having been built successfully.
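That dependency can be made explicit in the delivery step itself. The `deliver` helper below is a hypothetical sketch, not existing code: it installs a config file only when the binary it serves actually made it into the image, and skips it (with a warning) otherwise.

```shell
#!/bin/sh
work=$(mktemp -d)
mkdir -p "$work/image/bin" "$work/image/etc" "$work/conf"
echo "id:3:initdefault:" > "$work/conf/inittab"

touch "$work/image/bin/init"    # pretend init was built successfully
                                # (getty deliberately was not)

# Install a config file only if its owning binary exists in the image.
deliver() {
    bin="$1"; conf="$2"
    if [ -e "$work/image/bin/$bin" ]; then
        cp "$work/conf/$conf" "$work/image/etc/$conf"
    else
        echo "skipping $conf: $bin was not built" >&2
    fi
}

deliver init inittab    # installed: init exists
deliver getty issue     # skipped: getty failed or was never built
```

A make-based build could express the same thing with the config file depending on the binary's target, so a failed build step automatically suppresses the config delivery.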