Does an unused member variable take up memory?



Does initializing a member variable and not referencing/using it further take up RAM during runtime, or does the compiler simply ignore that variable?



struct Foo {
    int var1;
    int var2;

    Foo() { var1 = 5; std::cout << var1; }
};


In the example above, the member var1 gets a value which is then displayed in the console. var2, however, is not used at all. Therefore, writing it to memory at runtime would be a waste of resources. Does the compiler take these kinds of situations into account and simply ignore unused variables, or is the Foo object always the same size, regardless of whether its members are used?










c++ memory struct






asked Mar 8 at 10:03 by Chriss555888; edited Mar 12 at 14:26 by YSC







  • This depends on the compiler, the architecture, the operating system and the optimisation used. – Owl, Mar 8 at 10:04

  • There is a metric ton of low-level driver code out there that specifically adds do-nothing struct members for padding to match hardware data frame sizes and as a hack to get desired memory alignment. If a compiler started optimizing these out there would be much breakage. – Andy Brown, Mar 8 at 10:13

  • @Andy they're not really do-nothing, as the address of the following data members is evaluated. This means the existence of those padding members does have an observable effect on the program. Here, var2 doesn't. – YSC, Mar 8 at 10:19

  • I would be surprised if the compiler could optimize it away, given that any compilation unit addressing such a struct might get linked to another compilation unit using the same struct, and the compiler can't know if the separate compilation unit addresses the member or not. – Galik, Mar 8 at 10:33

  • @geza sizeof(Foo) cannot decrease by definition - if you print sizeof(Foo) it must yield 8 (on common platforms). Compilers can optimize away the space used by var2 (no matter if through new or on the stack or in function calls...) in any context they find it reasonable, even without LTO or whole program optimization. Where that is not possible they won't do it, as with just about any other optimization. I believe the edit to the accepted answer makes it significantly less likely to be misled by it. – Max Langhof, Mar 8 at 13:05













6 Answers

The golden C++ "as-if" rule1 states that, if the observable behavior of a program doesn't depend on an unused data-member's existence, the compiler is allowed to optimize it away.




Does an unused member variable take up memory?




No (if it is "really" unused).




Now two questions come to mind:



  1. When would the observable behavior not depend on a member's existence?

  2. Does that kind of situation occur in real-life programs?

Let's start with an example.



Example



#include <iostream>

struct Foo1
{ int var1 = 5; Foo1() { std::cout << var1; } };

struct Foo2
{ int var1 = 5; int var2; Foo2() { std::cout << var1; } };

void f1() { (void) Foo1{}; }
void f2() { (void) Foo2{}; }


If we ask gcc to compile this translation unit, it outputs:



f1():
mov esi, 5
mov edi, OFFSET FLAT:_ZSt4cout
jmp std::basic_ostream<char, std::char_traits<char> >::operator<<(int)
f2():
jmp f1()


f2 is the same as f1, and no memory is ever used to hold an actual Foo2::var2. (Clang does something similar).



Discussion



Some may say this is different for two reasons:



  1. this is too trivial an example,

  2. the struct is entirely optimized away, so it doesn't count.

Well, a good program is a smart and complex assembly of simple things rather than a simple juxtaposition of complex things. In real life, you write tons of simple functions using simple structures that the compiler optimizes away. For instance:



bool insert(std::set<int>& set, int value)
{
    return set.insert(value).second;
}



This is a genuine example of a data-member (here, std::pair<std::set<int>::iterator, bool>::first) being unused. Guess what? It is optimized away (simpler example with a dummy set if that assembly makes you cry).



Now would be the perfect time to read the excellent answer of Max Langhof (upvote it for me please). It explains why, in the end, the concept of structure doesn't make sense at the assembly level the compiler outputs.



"But, if I do X, the fact that the unused member is optimized away is a problem!"



There have been a number of comments arguing this answer must be wrong because some operation (like assert(sizeof(Foo2) == 2*sizeof(int))) would break something.



If X is part of the observable behavior of the program2, the compiler is not allowed to optimize it away. There are a lot of operations on an object containing an "unused" data-member which would have an observable effect on the program. If such an operation is performed, or if the compiler cannot prove that none is performed, that "unused" data-member is part of the observable behavior of the program and cannot be optimized away. (A minimal sketch illustrating this follows the list below.)



Operations that affect the observable behavior include, but are not limited to:



  • taking the size of the object's type (sizeof(Foo)),

  • taking the address of a data member declared after the "unused" one,

  • copying the object with a function like memcpy,

  • manipulating the representation of the object (like with memcmp),

  • qualifying an object as volatile,


  • etc.
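
To make the list concrete, here is a minimal sketch (added for illustration, not part of the original answer) with a Foo2-like struct where var2 cannot be dropped, because the program takes the size of the type and inspects the object representation:

#include <cstring>
#include <iostream>

struct Foo2 {
    int var1 = 5;
    int var2;          // "unused", but observed through sizeof and memcmp below
};

int main()
{
    // Taking the size of the type: var2 contributes to sizeof(Foo2)
    // (2 * sizeof(int) on common platforms).
    std::cout << sizeof(Foo2) << '\n';

    // Manipulating the object representation: memcmp reads the bytes where
    // var2 lives, so those bytes have to exist (they are set here so the
    // comparison is well-defined).
    Foo2 a, b;
    a.var2 = 0;
    b.var2 = 0;
    std::cout << (std::memcmp(&a, &b, sizeof(Foo2)) == 0) << '\n';
}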


1)




[intro.abstract]/1



The semantic descriptions in this document define a parameterized nondeterministic abstract machine. This document places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.




2) Like an assert passing or failing is.






answered Mar 8 at 10:12 by YSC; edited Mar 13 at 14:42 by s3cur3

It's important to realize that the code the compiler produces has no actual knowledge of your data structures (because such a thing doesn't exist on assembly level), and neither does the optimizer. The compiler only produces code for each function, not data structures.



Ok, it also writes constant data sections and such.



Based on that, we can already say that the optimizer won't "remove" or "eliminate" members, because it doesn't output data structures. It outputs code, which may or may not use the members, and among its goals is saving memory or cycles by eliminating pointless uses (i.e. writes/reads) of the members.




The gist of it is that "if the compiler can prove within the scope of a function (including functions that were inlined into it) that the unused member makes no difference for how the function operates (and what it returns) then chances are good that the presence of the member causes no overhead".



As you make the interactions of a function with the outside world more complicated/unclear to the compiler (take/return more complex data structures, e.g. a std::vector<Foo>, hide the definition of a function in a different compilation unit, forbid/disincentivize inlining etc.), it becomes more and more likely that the compiler cannot prove that the unused member has no effect.



There are no hard rules here because it all depends on the optimizations the compiler makes, but as long as you do trivial things (such as shown in YSC's answer) it's very likely that no overhead will be present, whereas doing complicated things (e.g. returning a std::vector<Foo> from a function too large for inlining) will probably incur the overhead.




To illustrate the point, consider this example:



#include <array>
#include <cstring>

struct Foo {
    int var1 = 3;
    int var2 = 4;
    int var3 = 5;
};

int test()
{
    Foo foo;
    std::array<char, sizeof(Foo)> arr;
    std::memcpy(&arr, &foo, sizeof(Foo));
    return arr[0] + arr[4];
}


We do non-trivial things here (take addresses, inspect and add bytes from the byte representation) and yet the optimizer can figure out that the result is always the same on this platform:



test(): # @test()
mov eax, 7
ret


Not only did the members of Foo not occupy any memory, a Foo didn't even come into existence! If there are other usages that can't be optimized then e.g. sizeof(Foo) might matter - but only for that segment of code! If all usages could be optimized like this then the existence of e.g. var3 does not influence the generated code. But even if it is used somewhere else, test() would remain optimized!



In short: Each usage of Foo is optimized independently. Some may use more memory because of an unneeded member, some may not. Consult your compiler manual for more details.
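
As a complement, here is a sketch (added for illustration, not part of the original answer) of the kind of situation where the compiler typically cannot drop the member: Foo is handed to a function it cannot see into, so the full object, var2 included, has to exist in memory. The function consume() is hypothetical and assumed to be defined in another translation unit:

struct Foo {
    int var1;
    int var2;   // never read or written in this translation unit
};

void consume(const Foo&);   // assumed to be defined elsewhere; not inlinable here

void caller()
{
    Foo foo{5, 0};
    consume(foo);   // foo must be materialized with all sizeof(Foo) bytes,
                    // because consume() may legally read foo.var2
}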






answered Mar 8 at 10:24 by Max Langhof; edited Mar 8 at 15:26

  • Mic drop "Consult your compiler manual for more details." :D – YSC, Mar 12 at 14:31

The compiler will only optimise away an unused member variable (especially a public one) if it can prove that removing the variable has no side effects and that no part of the program depends on the size of Foo being the same.



I don't think any current compiler performs such optimisations unless the structure isn't really being used at all. Some compilers may at least warn about unused private variables but not usually for public ones.
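
For the warning case, a minimal sketch (added for illustration, not part of the original answer): clang can diagnose unused private fields, for example via -Wall or -Wunused-private-field, while an unused public member like var2 normally produces no warning. The file name is purely illustrative:

// Compile with e.g.: clang++ -Wall -c example.cpp
// clang is expected to report: private field 'padding_' is not used
// [-Wunused-private-field]; the unused public member gets no such warning.
class Widget {
public:
    int value = 0;   // public and unused: typically no warning
private:
    int padding_;    // private and unused: clang warns
};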






  • And yet it does: godbolt.org/z/UJKguS + no compiler would warn for an unused data member. – YSC, Mar 8 at 10:08

  • @YSC clang++ does warn about unused data members and variables. – Maxim Egorushkin, Mar 8 at 10:10

  • @YSC I think that's a slightly different situation, it's optimised the structure away completely and just prints 5 directly. – Alan Birtles, Mar 8 at 10:11

  • @AlanBirtles I don't see how it is different. The compiler optimized everything from the object that has no effect on the observable behavior of the program. So your first sentence "the compiler is very unlikely to optimize away an unused member variable" is wrong. – YSC, Mar 8 at 10:16

  • @YSC in real code where the structure is actually being used rather than just constructed for its side effects it's probably more unlikely it would be optimised away. – Alan Birtles, Mar 8 at 10:21

In general, you have to assume that you get what you asked for; for example, the "unused" member variables are there.



Since in your example both members are public, the compiler cannot know if some code (particularly from other translation units = other *.cpp files, which are compiled separately and then linked) would access the "unused" member.



The answer of YSC gives a very simple example, where the class type is only used as a variable of automatic storage duration and where no pointer to that variable is taken. There, the compiler can inline all the code and can then eliminate all the dead code.



If you have interfaces between functions defined in different translation units, the compiler typically does not know anything. The interfaces typically follow some predefined ABI (such as the platform's C++ ABI) such that different object files can be linked together without any problems. Typically, ABIs do not care whether a member is used or not. So, in such cases, the second member has to be physically in memory (unless eliminated later by the linker).



And as long as you are within the boundaries of the language, you cannot observe that any elimination happens. If you call sizeof(Foo), you will get 2*sizeof(int). If you create an array of Foos, the distance between the beginnings of two consecutive objects of Foo is always sizeof(Foo) bytes.



Your type is a standard-layout type, which means that you can also access members based on compile-time computed offsets (cf. the offsetof macro). Moreover, you can inspect the byte-by-byte representation of the object by copying it onto an array of char using std::memcpy. In all these cases, the second member can be observed to be there.
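
A minimal sketch of that observation (added for illustration, not from the original answer), using sizeof and offsetof on a Foo like the one in the question (constructor omitted for brevity):

#include <cstddef>     // offsetof
#include <iostream>
#include <type_traits>

struct Foo {
    int var1;
    int var2;   // "unused", yet visible in the layout below
};

int main()
{
    static_assert(std::is_standard_layout<Foo>::value, "offsetof requires standard layout");
    std::cout << sizeof(Foo) << '\n';          // typically 2 * sizeof(int)
    std::cout << offsetof(Foo, var2) << '\n';  // typically sizeof(int)
}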






  • +1: only aggressive whole-program optimization could possibly adjust data layout (including compile-time sizes and offsets) for cases where a local struct object isn't optimized away entirely. gcc -fwhole-program -O3 *.c could in theory do it, but in practice probably won't (e.g. in case the program makes some assumptions about what exact value sizeof() has on this target, and because it's a really complicated optimization that programmers should do by hand if they want it). – Peter Cordes, Mar 8 at 22:58

The examples provided by other answers to this question which elide var2 are based on a single optimization technique: constant propagation, and subsequent elision of the whole structure (not the elision of just var2). This is the simple case, and optimizing compilers do implement it.



For unmanaged C/C++ code the answer is that the compiler will in general not elide var2. As far as I know there is no support for such a C/C++ struct transformation in debugging information, and if the struct is accessible as a variable in a debugger then var2 cannot be elided. As far as I know no current C/C++ compiler can specialize functions according to elision of var2, so if the struct is passed to or returned from a non-inlined function then var2 cannot be elided.



For managed languages such as C#/Java with a JIT compiler the compiler might be able to safely elide var2 because it can precisely track if it is being used and whether it escapes to unmanaged code. The physical size of the struct in managed languages can be different from its size reported to the programmer.



As of 2019, C/C++ compilers cannot elide var2 from the struct unless the whole struct variable is elided. For interesting cases of elision of var2 from the struct, the answer is: no.



Some future C/C++ compilers will be able to elide var2 from the struct, and the ecosystem built around the compilers will need to adapt to process elision information generated by compilers.






  • Your paragraph about debug information boils down to "we can't optimize it away if that would make debugging harder", which is just plain wrong. Or I'm misreading. Could you clarify? – Max Langhof, Mar 11 at 8:22

  • If the compiler emits debug information about the struct then it cannot elide var2. Options are: (1) do not emit the debug information if it does not correspond to the physical representation of the struct, (2) support struct member elision in debug information and emit the debug information. – atomsymbol, Mar 11 at 11:47

It's dependent on your compiler and its optimization level.



In gcc, if you specify -O, it will turn on the following optimization flags:



-fauto-inc-dec 
-fbranch-count-reg
-fcombine-stack-adjustments
-fcompare-elim
-fcprop-registers
-fdce
-fdefer-pop
...


-fdce stands for Dead Code Elimination.



You can use __attribute__((used)) to prevent gcc from eliminating an unused variable with static storage (a short sketch follows the quoted documentation below):




This attribute, attached to a variable with static storage, means that
the variable must be emitted even if it appears that the variable is
not referenced.



When applied to a static data member of a C++ class template, the
attribute also means that the member is instantiated if the class
itself is instantiated.
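
A minimal sketch of that attribute (added for illustration, not part of the original answer); note that it applies to variables with static storage duration, not to ordinary per-instance members like var2:

// keep_me is never referenced anywhere, but because of __attribute__((used))
// gcc still emits it into the object file instead of discarding it.
static int keep_me __attribute__((used)) = 42;

int main()
{
    return 0;   // keep_me is intentionally never mentioned
}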







  • That's for static data members, not unused per-instance members (which don't get optimized away unless the whole object does). But yes, I guess that does count. BTW, eliminating unused static variables isn't dead code elimination, unless GCC bends the term. – Peter Cordes, Mar 8 at 23:00

Your Answer






StackExchange.ifUsing("editor", function ()
StackExchange.using("externalEditor", function ()
StackExchange.using("snippets", function ()
StackExchange.snippets.init();
);
);
, "code-snippets");

StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "1"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













draft saved

draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f55060820%2fdoes-an-unused-member-variable-take-up-memory%23new-answer', 'question_page');

);

Post as a guest















Required, but never shown

























6 Answers
6






active

oldest

votes








6 Answers
6






active

oldest

votes









active

oldest

votes






active

oldest

votes









94














The golden C++ "as-if" rule1 states that, if the observable behavior of a program doesn't depend on an unused data-member existence, the compiler is allowed to optimized it away.




Does an unused member variable take up memory?




No (if it is "really" unused).




Now comes two questions in mind:



  1. When would the observable behavior not depend on a member existence?

  2. Does that kind of situations occurs in real life programs?

Let's start with an example.



Example



#include <iostream>

struct Foo1
int var1 = 5; Foo1() std::cout << var1; ;

struct Foo2
int var1 = 5; int var2; Foo2() std::cout << var1; ;

void f1() (void) Foo1;
void f2() (void) Foo2;


If we ask gcc to compile this translation unit, it outputs:



f1():
mov esi, 5
mov edi, OFFSET FLAT:_ZSt4cout
jmp std::basic_ostream<char, std::char_traits<char> >::operator<<(int)
f2():
jmp f1()


f2 is the same as f1, and no memory is ever used to hold an actual Foo2::var2. (Clang does something similar).



Discussion



Some may say this is different for two reasons:



  1. this is too trivial an example,

  2. the struct is entirely optimized, it doesn't count.

Well, a good program is a smart and complex assembly of simple things rather than a simple juxtaposition of complex things. In real life, you write tons of simple functions using simple structures than the compiler optimizes away. For instance:



bool insert(std::set<int>& set, int value)

return set.insert(value).second;



This is a genuine example of a data-member (here, std::pair<std::set<int>::iterator, bool>::first) being unused. Guess what? It is optimized away (simpler example with a dummy set if that assembly makes you cry).



Now would be the perfect time to read the excellent answer of Max Langhof (upvote it for me please). It explains why, in the end, the concept of structure doesn't make sense at the assembly level the compiler outputs.



"But, if I do X, the fact that the unused member is optimized away is a problem!"



There have been a number of comments arguing this answer must be wrong because some operation (like assert(sizeof(Foo2) == 2*sizeof(int))) would break something.



If X is part of the observable behavior of the program2, the compiler is not allowed to optimized things away. There are a lot of operations on an object containing an "unused" data-member which would have an observable effect on the program. If such an operation is performed or if the compiler cannot prove none is performed, that "unused" data-member is part of the observable behavior of the program and cannot be optimized away.



Operations that affect the observable behavior include, but are not limited to:



  • taking the size of a type of object (sizeof(Foo)),

  • taking the address of a data member declared after the "unused" one,

  • copying the object with a function like memcpy,

  • manipulating the representation of the object (like with memcmp),

  • qualifying an object as volatile,


  • etc.


1)




[intro.abstract]/1



The semantic descriptions in this document define a parameterized nondeterministic abstract machine. This document places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.




2) Like an assert passing or failing is.






share|improve this answer

























  • Comments suggesting improvements to the answer have been archived in chat.

    – Cody Gray
    Mar 12 at 16:55
















94














The golden C++ "as-if" rule1 states that, if the observable behavior of a program doesn't depend on an unused data-member existence, the compiler is allowed to optimized it away.




Does an unused member variable take up memory?




No (if it is "really" unused).




Now comes two questions in mind:



  1. When would the observable behavior not depend on a member existence?

  2. Does that kind of situations occurs in real life programs?

Let's start with an example.



Example



#include <iostream>

struct Foo1
int var1 = 5; Foo1() std::cout << var1; ;

struct Foo2
int var1 = 5; int var2; Foo2() std::cout << var1; ;

void f1() (void) Foo1;
void f2() (void) Foo2;


If we ask gcc to compile this translation unit, it outputs:



f1():
mov esi, 5
mov edi, OFFSET FLAT:_ZSt4cout
jmp std::basic_ostream<char, std::char_traits<char> >::operator<<(int)
f2():
jmp f1()


f2 is the same as f1, and no memory is ever used to hold an actual Foo2::var2. (Clang does something similar).



Discussion



Some may say this is different for two reasons:



  1. this is too trivial an example,

  2. the struct is entirely optimized, it doesn't count.

Well, a good program is a smart and complex assembly of simple things rather than a simple juxtaposition of complex things. In real life, you write tons of simple functions using simple structures than the compiler optimizes away. For instance:



bool insert(std::set<int>& set, int value)

return set.insert(value).second;



This is a genuine example of a data-member (here, std::pair<std::set<int>::iterator, bool>::first) being unused. Guess what? It is optimized away (simpler example with a dummy set if that assembly makes you cry).



Now would be the perfect time to read the excellent answer of Max Langhof (upvote it for me please). It explains why, in the end, the concept of structure doesn't make sense at the assembly level the compiler outputs.



"But, if I do X, the fact that the unused member is optimized away is a problem!"



There have been a number of comments arguing this answer must be wrong because some operation (like assert(sizeof(Foo2) == 2*sizeof(int))) would break something.



If X is part of the observable behavior of the program2, the compiler is not allowed to optimized things away. There are a lot of operations on an object containing an "unused" data-member which would have an observable effect on the program. If such an operation is performed or if the compiler cannot prove none is performed, that "unused" data-member is part of the observable behavior of the program and cannot be optimized away.



Operations that affect the observable behavior include, but are not limited to:



  • taking the size of a type of object (sizeof(Foo)),

  • taking the address of a data member declared after the "unused" one,

  • copying the object with a function like memcpy,

  • manipulating the representation of the object (like with memcmp),

  • qualifying an object as volatile,


  • etc.


1)




[intro.abstract]/1



The semantic descriptions in this document define a parameterized nondeterministic abstract machine. This document places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.




2) Like an assert passing or failing is.






share|improve this answer

























  • Comments suggesting improvements to the answer have been archived in chat.

    – Cody Gray
    Mar 12 at 16:55














94












94








94







The golden C++ "as-if" rule1 states that, if the observable behavior of a program doesn't depend on an unused data-member existence, the compiler is allowed to optimized it away.




Does an unused member variable take up memory?




No (if it is "really" unused).




Now comes two questions in mind:



  1. When would the observable behavior not depend on a member existence?

  2. Does that kind of situations occurs in real life programs?

Let's start with an example.



Example



#include <iostream>

struct Foo1
int var1 = 5; Foo1() std::cout << var1; ;

struct Foo2
int var1 = 5; int var2; Foo2() std::cout << var1; ;

void f1() (void) Foo1;
void f2() (void) Foo2;


If we ask gcc to compile this translation unit, it outputs:



f1():
mov esi, 5
mov edi, OFFSET FLAT:_ZSt4cout
jmp std::basic_ostream<char, std::char_traits<char> >::operator<<(int)
f2():
jmp f1()


f2 is the same as f1, and no memory is ever used to hold an actual Foo2::var2. (Clang does something similar).



Discussion



Some may say this is different for two reasons:



  1. this is too trivial an example,

  2. the struct is entirely optimized, it doesn't count.

Well, a good program is a smart and complex assembly of simple things rather than a simple juxtaposition of complex things. In real life, you write tons of simple functions using simple structures than the compiler optimizes away. For instance:



bool insert(std::set<int>& set, int value)

return set.insert(value).second;



This is a genuine example of a data-member (here, std::pair<std::set<int>::iterator, bool>::first) being unused. Guess what? It is optimized away (simpler example with a dummy set if that assembly makes you cry).



Now would be the perfect time to read the excellent answer of Max Langhof (upvote it for me please). It explains why, in the end, the concept of structure doesn't make sense at the assembly level the compiler outputs.



"But, if I do X, the fact that the unused member is optimized away is a problem!"



There have been a number of comments arguing this answer must be wrong because some operation (like assert(sizeof(Foo2) == 2*sizeof(int))) would break something.



If X is part of the observable behavior of the program2, the compiler is not allowed to optimized things away. There are a lot of operations on an object containing an "unused" data-member which would have an observable effect on the program. If such an operation is performed or if the compiler cannot prove none is performed, that "unused" data-member is part of the observable behavior of the program and cannot be optimized away.



Operations that affect the observable behavior include, but are not limited to:



  • taking the size of a type of object (sizeof(Foo)),

  • taking the address of a data member declared after the "unused" one,

  • copying the object with a function like memcpy,

  • manipulating the representation of the object (like with memcmp),

  • qualifying an object as volatile,


  • etc.


1)




[intro.abstract]/1



The semantic descriptions in this document define a parameterized nondeterministic abstract machine. This document places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.




2) Like an assert passing or failing is.






share|improve this answer















The golden C++ "as-if" rule1 states that, if the observable behavior of a program doesn't depend on an unused data-member existence, the compiler is allowed to optimized it away.




Does an unused member variable take up memory?




No (if it is "really" unused).




Now comes two questions in mind:



  1. When would the observable behavior not depend on a member existence?

  2. Does that kind of situations occurs in real life programs?

Let's start with an example.



Example



#include <iostream>

struct Foo1
int var1 = 5; Foo1() std::cout << var1; ;

struct Foo2
int var1 = 5; int var2; Foo2() std::cout << var1; ;

void f1() (void) Foo1;
void f2() (void) Foo2;


If we ask gcc to compile this translation unit, it outputs:



f1():
mov esi, 5
mov edi, OFFSET FLAT:_ZSt4cout
jmp std::basic_ostream<char, std::char_traits<char> >::operator<<(int)
f2():
jmp f1()


f2 is the same as f1, and no memory is ever used to hold an actual Foo2::var2. (Clang does something similar).



Discussion



Some may say this is different for two reasons:



  1. this is too trivial an example,

  2. the struct is entirely optimized, it doesn't count.

Well, a good program is a smart and complex assembly of simple things rather than a simple juxtaposition of complex things. In real life, you write tons of simple functions using simple structures than the compiler optimizes away. For instance:



bool insert(std::set<int>& set, int value)

return set.insert(value).second;



This is a genuine example of a data-member (here, std::pair<std::set<int>::iterator, bool>::first) being unused. Guess what? It is optimized away (simpler example with a dummy set if that assembly makes you cry).



Now would be the perfect time to read the excellent answer of Max Langhof (upvote it for me please). It explains why, in the end, the concept of structure doesn't make sense at the assembly level the compiler outputs.



"But, if I do X, the fact that the unused member is optimized away is a problem!"



There have been a number of comments arguing this answer must be wrong because some operation (like assert(sizeof(Foo2) == 2*sizeof(int))) would break something.



If X is part of the observable behavior of the program2, the compiler is not allowed to optimized things away. There are a lot of operations on an object containing an "unused" data-member which would have an observable effect on the program. If such an operation is performed or if the compiler cannot prove none is performed, that "unused" data-member is part of the observable behavior of the program and cannot be optimized away.



Operations that affect the observable behavior include, but are not limited to:



  • taking the size of a type of object (sizeof(Foo)),

  • taking the address of a data member declared after the "unused" one,

  • copying the object with a function like memcpy,

  • manipulating the representation of the object (like with memcmp),

  • qualifying an object as volatile,


  • etc.


1)




[intro.abstract]/1



The semantic descriptions in this document define a parameterized nondeterministic abstract machine. This document places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.




2) Like an assert passing or failing is.







share|improve this answer














share|improve this answer



share|improve this answer








edited Mar 13 at 14:42









s3cur3

8781824




8781824










answered Mar 8 at 10:12









YSCYSC

25.5k557112




25.5k557112












  • Comments suggesting improvements to the answer have been archived in chat.

    – Cody Gray
    Mar 12 at 16:55


















  • Comments suggesting improvements to the answer have been archived in chat.

    – Cody Gray
    Mar 12 at 16:55

















Comments suggesting improvements to the answer have been archived in chat.

– Cody Gray
Mar 12 at 16:55






Comments suggesting improvements to the answer have been archived in chat.

– Cody Gray
Mar 12 at 16:55














59














It's important to realize that the code the compiler produces has no actual knowledge of your data structures (because such a thing doesn't exist on assembly level), and neither does the optimizer. The compiler only produces code for each function, not data structures.



Ok, it also writes constant data sections and such.



Based on that, we can already say that the optimizer won't "remove" or "eliminate" members, because it doesn't output data structures. It outputs code, which may or may not use the members, and among its goals is saving memory or cycles by eliminating pointless uses (i.e. writes/reads) of the members.




The gist of it is that "if the compiler can prove within the scope of a function (including functions that were inlined into it) that the unused member makes no difference for how the function operates (and what it returns) then chances are good that the presence of the member causes no overhead".



As you make the interactions of a function with the outside world more complicated/unclear to the compiler (take/return more complex data structures, e.g. a std::vector<Foo>, hide the definition of a function in a different compilation unit, forbid/disincentivize inlining etc.), it becomes more and more likely that the compiler cannot prove that the unused member has no effect.



There are no hard rules here because it all depends on the optimizations the compiler makes, but as long as you do trivial things (such as shown in YSC's answer) it's very likely that no overhead will be present, whereas doing complicated things (e.g. returning a std::vector<Foo> from a function too large for inlining) will probably incur the overhead.




To illustrate the point, consider this example:



struct Foo 
int var1 = 3;
int var2 = 4;
int var3 = 5;
;

int test()

Foo foo;
std::array<char, sizeof(Foo)> arr;
std::memcpy(&arr, &foo, sizeof(Foo));
return arr[0] + arr[4];



We do non-trivial things here (take addresses, inspect and add bytes from the byte representation) and yet the optimizer can figure out that the result is always the same on this platform:



test(): # @test()
mov eax, 7
ret


Not only did the members of Foo not occupy any memory, a Foo didn't even come into existence! If there are other usages that can't be optimized then e.g. sizeof(Foo) might matter - but only for that segment of code! If all usages could be optimized like this then the existence of e.g. var3 does not influence the generated code. But even if it is used somewhere else, test() would remain optimized!



In short: Each usage of Foo is optimized independently. Some may use more memory because of an unneeded member, some may not. Consult your compiler manual for more details.






share|improve this answer




















  • 4





    Mic drop "Consult your compiler manual for more details." :D

    – YSC
    Mar 12 at 14:31















59














It's important to realize that the code the compiler produces has no actual knowledge of your data structures (because such a thing doesn't exist on assembly level), and neither does the optimizer. The compiler only produces code for each function, not data structures.



Ok, it also writes constant data sections and such.



Based on that, we can already say that the optimizer won't "remove" or "eliminate" members, because it doesn't output data structures. It outputs code, which may or may not use the members, and among its goals is saving memory or cycles by eliminating pointless uses (i.e. writes/reads) of the members.




The gist of it is that "if the compiler can prove within the scope of a function (including functions that were inlined into it) that the unused member makes no difference for how the function operates (and what it returns) then chances are good that the presence of the member causes no overhead".



As you make the interactions of a function with the outside world more complicated/unclear to the compiler (take/return more complex data structures, e.g. a std::vector<Foo>, hide the definition of a function in a different compilation unit, forbid/disincentivize inlining etc.), it becomes more and more likely that the compiler cannot prove that the unused member has no effect.



There are no hard rules here because it all depends on the optimizations the compiler makes, but as long as you do trivial things (such as shown in YSC's answer) it's very likely that no overhead will be present, whereas doing complicated things (e.g. returning a std::vector<Foo> from a function too large for inlining) will probably incur the overhead.




To illustrate the point, consider this example:



struct Foo 
int var1 = 3;
int var2 = 4;
int var3 = 5;
;

int test()

Foo foo;
std::array<char, sizeof(Foo)> arr;
std::memcpy(&arr, &foo, sizeof(Foo));
return arr[0] + arr[4];



We do non-trivial things here (take addresses, inspect and add bytes from the byte representation) and yet the optimizer can figure out that the result is always the same on this platform:



test(): # @test()
mov eax, 7
ret


Not only did the members of Foo not occupy any memory, a Foo didn't even come into existence! If there are other usages that can't be optimized then e.g. sizeof(Foo) might matter - but only for that segment of code! If all usages could be optimized like this then the existence of e.g. var3 does not influence the generated code. But even if it is used somewhere else, test() would remain optimized!



In short: Each usage of Foo is optimized independently. Some may use more memory because of an unneeded member, some may not. Consult your compiler manual for more details.






share|improve this answer




















  • 4





    Mic drop "Consult your compiler manual for more details." :D

    – YSC
    Mar 12 at 14:31













59












59








59







It's important to realize that the code the compiler produces has no actual knowledge of your data structures (because such a thing doesn't exist on assembly level), and neither does the optimizer. The compiler only produces code for each function, not data structures.



Ok, it also writes constant data sections and such.



Based on that, we can already say that the optimizer won't "remove" or "eliminate" members, because it doesn't output data structures. It outputs code, which may or may not use the members, and among its goals is saving memory or cycles by eliminating pointless uses (i.e. writes/reads) of the members.




The gist of it is that "if the compiler can prove within the scope of a function (including functions that were inlined into it) that the unused member makes no difference for how the function operates (and what it returns) then chances are good that the presence of the member causes no overhead".



As you make the interactions of a function with the outside world more complicated/unclear to the compiler (take/return more complex data structures, e.g. a std::vector<Foo>, hide the definition of a function in a different compilation unit, forbid/disincentivize inlining etc.), it becomes more and more likely that the compiler cannot prove that the unused member has no effect.



There are no hard rules here because it all depends on the optimizations the compiler makes, but as long as you do trivial things (such as shown in YSC's answer) it's very likely that no overhead will be present, whereas doing complicated things (e.g. returning a std::vector<Foo> from a function too large for inlining) will probably incur the overhead.




To illustrate the point, consider this example:



struct Foo 
int var1 = 3;
int var2 = 4;
int var3 = 5;
;

int test()

Foo foo;
std::array<char, sizeof(Foo)> arr;
std::memcpy(&arr, &foo, sizeof(Foo));
return arr[0] + arr[4];



We do non-trivial things here (take addresses, inspect and add bytes from the byte representation) and yet the optimizer can figure out that the result is always the same on this platform:



test(): # @test()
mov eax, 7
ret


Not only did the members of Foo not occupy any memory, a Foo didn't even come into existence! If there are other usages that can't be optimized then e.g. sizeof(Foo) might matter - but only for that segment of code! If all usages could be optimized like this then the existence of e.g. var3 does not influence the generated code. But even if it is used somewhere else, test() would remain optimized!



In short: Each usage of Foo is optimized independently. Some may use more memory because of an unneeded member, some may not. Consult your compiler manual for more details.






share|improve this answer















It's important to realize that the code the compiler produces has no actual knowledge of your data structures (because such a thing doesn't exist on assembly level), and neither does the optimizer. The compiler only produces code for each function, not data structures.



Ok, it also writes constant data sections and such.



Based on that, we can already say that the optimizer won't "remove" or "eliminate" members, because it doesn't output data structures. It outputs code, which may or may not use the members, and among its goals is saving memory or cycles by eliminating pointless uses (i.e. writes/reads) of the members.




The gist of it is that "if the compiler can prove within the scope of a function (including functions that were inlined into it) that the unused member makes no difference for how the function operates (and what it returns) then chances are good that the presence of the member causes no overhead".



As you make the interactions of a function with the outside world more complicated/unclear to the compiler (take/return more complex data structures, e.g. a std::vector<Foo>, hide the definition of a function in a different compilation unit, forbid/disincentivize inlining etc.), it becomes more and more likely that the compiler cannot prove that the unused member has no effect.



There are no hard rules here because it all depends on the optimizations the compiler makes, but as long as you do trivial things (such as shown in YSC's answer) it's very likely that no overhead will be present, whereas doing complicated things (e.g. returning a std::vector<Foo> from a function too large for inlining) will probably incur the overhead.




To illustrate the point, consider this example:



#include <array>
#include <cstring>

struct Foo {
    int var1 = 3;
    int var2 = 4;
    int var3 = 5;
};

int test()
{
    Foo foo;
    std::array<char, sizeof(Foo)> arr;
    std::memcpy(&arr, &foo, sizeof(Foo));
    return arr[0] + arr[4];
}


We do non-trivial things here (take addresses, inspect and add bytes from the byte representation) and yet the optimizer can figure out that the result is always the same on this platform:



test(): # @test()
mov eax, 7
ret


Not only did the members of Foo not occupy any memory, a Foo didn't even come into existence! If there are other usages that can't be optimized then e.g. sizeof(Foo) might matter - but only for that segment of code! If all usages could be optimized like this then the existence of e.g. var3 does not influence the generated code. But even if it is used somewhere else, test() would remain optimized!



In short: Each usage of Foo is optimized independently. Some may use more memory because of an unneeded member, some may not. Consult your compiler manual for more details.







answered Mar 8 at 10:24 by Max Langhof (edited Mar 8 at 15:26)

• Mic drop "Consult your compiler manual for more details." :D – YSC, Mar 12 at 14:31
The compiler will only optimise away an unused member variable (especially a public one) if it can prove that removing the variable has no side effects and that no part of the program depends on the size of Foo being the same.



I don't think any current compiler performs such optimisations unless the structure isn't really being used at all. Some compilers may at least warn about unused private variables but not usually for public ones.
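As a hedged sketch of the "depends on the size of Foo" condition (the file name and the dump function are made up for illustration): this program writes the raw bytes of a Foo, so dropping var2 would change its observable output and the compiler must keep the member.

#include <cstdio>

struct Foo {
    int var1 = 5;
    int var2 = 0;   // never read directly by this program...
};

// ...but the raw-byte dump depends on sizeof(Foo) and on var2 occupying
// its slot, so removing the member would change the file contents.
bool dump(const Foo& f, const char* path)
{
    std::FILE* out = std::fopen(path, "wb");
    if (!out)
        return false;
    const bool ok = std::fwrite(&f, sizeof(Foo), 1, out) == 1;
    std::fclose(out);
    return ok;
}

int main()
{
    Foo f;
    return dump(f, "foo.bin") ? 0 : 1;
}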






answered Mar 8 at 10:06 by Alan Birtles (edited Mar 13 at 7:33)

• And yet it does: godbolt.org/z/UJKguS + no compiler would warn for an unused data member. – YSC, Mar 8 at 10:08

• @YSC clang++ does warn about unused data members and variables. – Maxim Egorushkin, Mar 8 at 10:10

• @YSC I think that's a slightly different situation: it's optimised the structure away completely and just prints 5 directly. – Alan Birtles, Mar 8 at 10:11

• @AlanBirtles I don't see how it is different. The compiler optimized everything from the object that has no effect on the observable behavior of the program. So your first sentence "the compiler is very unlikely to optimize away an unused member variable" is wrong. – YSC, Mar 8 at 10:16

• @YSC in real code where the structure is actually being used rather than just constructed for its side effects, it's probably more unlikely it would be optimised away. – Alan Birtles, Mar 8 at 10:21
In general, you have to assume that you get what you asked for: the "unused" member variables are there.



Since in your example both members are public, the compiler cannot know whether some code (particularly code from other translation units, i.e. other *.cpp files, which are compiled separately and then linked) accesses the "unused" member.



YSC's answer gives a very simple example, where the class type is only used for a variable of automatic storage duration and where no pointer to that variable is taken. There, the compiler can inline all the code and then eliminate all the dead code.



If functions are defined in different translation units, the compiler typically knows nothing about the other side of the interface. Such interfaces typically follow some predefined ABI (like that) so that different object files can be linked together without any problems. Typically an ABI makes no distinction between members that are used and members that are not, so in such cases the second member has to be physically present in memory (unless it is eliminated later by the linker).
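A minimal sketch of that situation (the file split and the function sum are assumptions for illustration): when compiling main.cpp, the compiler only sees the declaration of sum, so it has to pass a Foo laid out exactly as the ABI prescribes, second member included.

// foo.h
struct Foo { int var1; int var2; };
int sum(Foo f);                      // defined in foo.cpp, compiled separately

// foo.cpp
// #include "foo.h"
int sum(Foo f) { return f.var1; }    // never touches var2

// main.cpp
// #include "foo.h"
int main()
{
    Foo f{5, 0};
    return sum(f);                   // f is passed according to the platform ABI
}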



And as long as you stay within the boundaries of the language, you cannot observe that any elimination happens. If you evaluate sizeof(Foo), you will get 2*sizeof(int). If you create an array of Foos, the distance between the beginnings of two consecutive Foo objects is always sizeof(Foo) bytes.
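A small sketch of the array-stride observation (nothing here is specific to any particular compiler):

#include <cassert>
#include <cstdint>

struct Foo { int var1; int var2; };

int main()
{
    Foo arr[2] = {};
    const auto p0 = reinterpret_cast<std::uintptr_t>(&arr[0]);
    const auto p1 = reinterpret_cast<std::uintptr_t>(&arr[1]);
    assert(p1 - p0 == sizeof(Foo));   // consecutive elements are sizeof(Foo) apart
    return 0;
}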



Your type is a standard-layout type, which means that you can also access members based on compile-time computed offsets (cf. the offsetof macro). Moreover, you can inspect the byte-by-byte representation of the object by copying it into an array of char using std::memcpy. In all these cases, the second member can be observed to be there.
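And a sketch of observing the second member through offsetof and the byte representation (assuming the two-int Foo from the question; the exact value of sizeof(Foo) is what typical platforms give, not a strict guarantee of the standard):

#include <cassert>
#include <cstddef>   // offsetof
#include <cstring>   // std::memcpy

struct Foo { int var1; int var2; };

int main()
{
    static_assert(sizeof(Foo) == 2 * sizeof(int), "typical layout assumed");

    Foo f{5, 7};
    unsigned char bytes[sizeof(Foo)];
    std::memcpy(bytes, &f, sizeof(Foo));            // byte-wise copy of the object

    int second = 0;
    std::memcpy(&second, bytes + offsetof(Foo, var2), sizeof(int));
    assert(second == 7);                            // the "unused" member is really there
    return 0;
}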






answered Mar 8 at 14:23 by Handy999 (edited Mar 9 at 12:02 by eFarzad)

• Comments are not for extended discussion; this conversation has been moved to chat. – Cody Gray, Mar 8 at 18:09

• +1: only aggressive whole-program optimization could possibly adjust data layout (including compile-time sizes and offsets) for cases where a local struct object isn't optimized away entirely. gcc -fwhole-program -O3 *.c could in theory do it, but in practice probably won't (e.g. in case the program makes some assumptions about what exact value sizeof() has on this target, and because it's a really complicated optimization that programmers should do by hand if they want it). – Peter Cordes, Mar 8 at 22:58
The examples provided by other answers to this question which elide var2 are based on a single optimization technique: constant propagation, and subsequent elision of the whole structure (not the elision of just var2). This is the simple case, and optimizing compilers do implement it.



For unmanaged C/C++ code the answer is that the compiler will in general not elide var2. As far as I know there is no support for such a struct transformation in C/C++ debugging information, and if the struct is accessible as a variable in a debugger then var2 cannot be elided. As far as I know no current C/C++ compiler can specialize functions according to the elision of var2, so if the struct is passed to or returned from a non-inlined function then var2 cannot be elided.
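A hedged sketch of the non-inlined case (the noinline attribute is a GCC/Clang extension, used here only to force the situation):

struct Foo {
    int var1 = 5;
    int var2 = 0;   // a candidate for elision...
};

// ...but since use() is not inlined, the caller must materialize a real Foo
// with the full, ABI-mandated layout in order to pass it by value.
__attribute__((noinline)) int use(Foo f)
{
    return f.var1;   // var2 is never read, yet it still travels along
}

int main()
{
    Foo f;
    return use(f);
}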



For managed languages such as C#/Java with a JIT compiler the compiler might be able to safely elide var2 because it can precisely track if it is being used and whether it escapes to unmanaged code. The physical size of the struct in managed languages can be different from its size reported to the programmer.



As of 2019, C/C++ compilers cannot elide var2 from the struct unless the whole struct variable is elided. For the interesting cases of eliding var2 from the struct, the answer is: no.



Some future C/C++ compilers will be able to elide var2 from the struct, and the ecosystem built around the compilers will need to adapt to process elision information generated by compilers.






answered Mar 8 at 18:27 by atomsymbol

• Your paragraph about debug information boils down to "we can't optimize it away if that would make debugging harder", which is just plain wrong. Or I'm misreading. Could you clarify? – Max Langhof, Mar 11 at 8:22

• If the compiler emits debug information about the struct then it cannot elide var2. Options are: (1) do not emit the debug information if it does not correspond to the physical representation of the struct, (2) support struct member elision in debug information and emit the debug information. – atomsymbol, Mar 11 at 11:47
It depends on your compiler and its optimization level.



In gcc, if you specify -O, it will turn on the following optimization flags:



-fauto-inc-dec 
-fbranch-count-reg
-fcombine-stack-adjustments
-fcompare-elim
-fcprop-registers
-fdce
-fdefer-pop
...


-fdce stands for Dead Code Elimination.



You can use __attribute__((used)) to prevent gcc from eliminating an unused variable with static storage:




This attribute, attached to a variable with static storage, means that
the variable must be emitted even if it appears that the variable is
not referenced.



When applied to a static data member of a C++ class template, the
attribute also means that the member is instantiated if the class
itself is instantiated.
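A minimal sketch of the attribute in use (the variable name is made up); without used, an optimizing build is free to drop the unreferenced internal-linkage variable from the object file entirely:

// Emitted into the object file even though nothing references it.
static const int build_tag __attribute__((used)) = 20190308;

int main()
{
    return 0;
}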







answered Mar 8 at 10:26 by wonter (edited Mar 8 at 12:24 by Toby Speight)

• That's for static data members, not unused per-instance members (which don't get optimized away unless the whole object does). But yes, I guess that does count. BTW, eliminating unused static variables isn't dead code elimination, unless GCC bends the term. – Peter Cordes, Mar 8 at 23:00
















