Converting a list of tensors to a tensor in PyTorch




I have a list of tensors, and each tensor has a different size. How can I convert this list of tensors into a single tensor using PyTorch?

For more context: my list contains tensors whose shapes differ only in the second dimension. For example, the first tensor's size is torch.Size([76080, 38]), while the second tensor in the list is torch.Size([76080, 36]).

When I use

torch.tensor(x)

I get this error:

ValueError: only one element tensors can be converted to Python scalars
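For reference, the failure is easy to reproduce with small stand-in tensors (the shapes here are hypothetical, scaled down from the ones above); depending on the PyTorch version, the exact error message may differ:

```python
import torch

# Hypothetical stand-ins for the real feature tensors: the first
# dimension matches, the second differs (like [76080, 38] vs [76080, 36]).
x = [torch.zeros(5, 4), torch.zeros(5, 3)]

try:
    torch.tensor(x)  # can't build one rectangular tensor from ragged data
except (ValueError, RuntimeError) as e:
    print(type(e).__name__)  # exact exception/message varies by version
</antml;>```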










  • Please provide more of your code. – Fábio Perez, Mar 7 at 21:52

  • for item in features: x.append(torch.tensor(item)) – Omar Abdelaziz, Mar 7 at 23:46

  • This gives me a list of tensors, but each tensor has a different size, so when I try torch.stack(x) I get the same error. @FábioPerez – Omar Abdelaziz, Mar 7 at 23:48















Tags: python, pytorch






asked Mar 7 at 18:40 by Omar Abdelaziz; edited Mar 8 at 20:08







2 Answers
Tensors can't hold variable-length data. You might be looking for cat.

For example, here we have a list with two tensors that have different sizes in their last dimension (dim=2), and we want to create one larger tensor containing both of their data, so we can use cat:

Also note that, as of right now, you can't use cat with half tensors on CPU, so you should convert them to float, do the concatenation, and then convert back to half.

import torch

a = torch.arange(8).reshape(2, 2, 2)
b = torch.arange(12).reshape(2, 2, 3)
my_list = [a, b]
my_tensor = torch.cat(my_list, dim=2)
print(my_tensor.shape)  # torch.Size([2, 2, 5])

You haven't explained your goal, so another option is to use pad_sequence:

from torch.nn.utils.rnn import pad_sequence

a = torch.ones(25, 300)
b = torch.ones(22, 300)
c = torch.ones(15, 300)
print(pad_sequence([a, b, c]).size())  # torch.Size([25, 3, 300])

Edit: in this particular case, you can use torch.cat([x.float() for x in sequence], dim=1).half()
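For the shapes in the question ([76080, 38] and [76080, 36]), the dimension that differs is dim=1, so a scaled-down sketch of that concatenation (toy shapes, not the real data) would be:

```python
import torch

# Toy stand-ins for the question's tensors: [76080, 38] and [76080, 36]
# scaled down to [4, 3] and [4, 2]; only the second dimension differs.
a = torch.ones(4, 3)
b = torch.ones(4, 2)

merged = torch.cat([a, b], dim=1)  # concatenate along the differing dim
print(merged.shape)  # torch.Size([4, 5])
```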






– Separius, answered Mar 8 at 18:42; edited Mar 8 at 20:02
  • Hey Separius, thanks for the answer, but could you explain what dim is and how I should set it? – Omar Abdelaziz, Mar 8 at 19:20

  • Also, I have a list of tensors sorted in descending order, and the shape of the first tensor is torch.Size([76080, 38]). – Omar Abdelaziz, Mar 8 at 19:25

  • The shapes of the other tensors differ in the second element; for example, the second tensor in the list is torch.Size([76080, 36]). – Omar Abdelaziz, Mar 8 at 19:26

  • When I remove the dim I get this error: RuntimeError: _th_cat is not implemented for type torch.HalfTensor – Omar Abdelaziz, Mar 8 at 19:28

  • Unfortunately I get the same error. – Omar Abdelaziz, Mar 8 at 19:31


















A Tensor in PyTorch isn't like a list in Python, which can hold objects of variable length.

In PyTorch, you can convert a fixed-length (rectangular) array to a Tensor:

>>> torch.Tensor([[1, 2], [3, 4]])
tensor([[1., 2.],
        [3., 4.]])

But not a ragged one:

>>> torch.Tensor([[1, 2], [3, 4, 5]])
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-16-809c707011cc> in <module>
----> 1 torch.Tensor([[1, 2], [3, 4, 5]])

ValueError: expected sequence of length 2 at dim 1 (got 3)

The same applies to torch.stack.
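To illustrate the contrast, torch.stack does work once every tensor in the list has the same shape; a minimal sketch:

```python
import torch

# Two tensors with identical shapes can be stacked along a new leading dim.
a = torch.ones(2, 3)
b = torch.zeros(2, 3)

stacked = torch.stack([a, b])
print(stacked.shape)  # torch.Size([2, 2, 3])
```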






– cloudyyyyy, answered Mar 8 at 7:41
  • Hi cloudyyyyy, thank you for your answer, it's helpful. But is there any solution if I have a list of tensors with different lengths that I want to stack into one tensor? – Omar Abdelaziz, Mar 8 at 19:08









