Converting a list of tensors to a tensor in PyTorch
I have a list of tensors, and each tensor has a different size. How can I convert this list of tensors into a single tensor using PyTorch?

For more info: each tensor in my list has a different size. For example, the first tensor's size is torch.Size([76080, 38]), while the shapes of the other tensors differ in the second element; the second tensor in the list is torch.Size([76080, 36]).

When I use

torch.tensor(x)

I get an error:

ValueError: only one element tensors can be converted to Python scalars
python pytorch
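For reference, the error can be reproduced with a small stand-in list (the shapes below are made up for illustration; only the mismatched second dimension matters):

```python
import torch

# Stand-ins for the question's tensors: same first dimension, different second.
a = torch.zeros(4, 3)
b = torch.zeros(4, 2)
x = [a, b]

try:
    torch.tensor(x)  # torch.tensor expects nested numbers, not a list of tensors
except ValueError as e:
    print(e)
```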
Please provide more of your code.
– Fábio Perez
Mar 7 at 21:52
for item in features: x.append(torch.tensor(item))
– Omar Abdelaziz
Mar 7 at 23:46
This gives me a list of tensors, but each tensor has a different size, so when I try torch.stack(x) it gives me the same error. @FábioPerez
– Omar Abdelaziz
Mar 7 at 23:48
edited Mar 8 at 20:08
asked Mar 7 at 18:40
Omar Abdelaziz
85
2 Answers
Tensors can't hold variable-length data. You might be looking for cat.

For example, here we have a list with two tensors that have different sizes in their last dimension (dim=2), and we want to create a larger tensor containing both of their data, so we can use cat to concatenate them.

Also note that you can't use cat with half tensors on CPU as of right now, so you should convert them to float, do the concatenation, and then convert back to half.
import torch
a = torch.arange(8).reshape(2, 2, 2)
b = torch.arange(12).reshape(2, 2, 3)
my_list = [a, b]
my_tensor = torch.cat(my_list, dim=2)
print(my_tensor.shape)  # torch.Size([2, 2, 5])
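The half-precision caveat above can be sketched as follows (a hypothetical pair of CPU half tensors; the round-trip through float is the workaround, not part of the concatenation itself):

```python
import torch

h1 = torch.randn(3, 4).half()
h2 = torch.randn(3, 2).half()

# Round-trip through float for CPU builds where cat lacks a half kernel.
out = torch.cat([h1.float(), h2.float()], dim=1).half()
print(out.dtype, out.shape)  # torch.float16 torch.Size([3, 6])
```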
You haven't explained your goal, so another option is to use pad_sequence, like this:
from torch.nn.utils.rnn import pad_sequence
a = torch.ones(25, 300)
b = torch.ones(22, 300)
c = torch.ones(15, 300)
pad_sequence([a, b, c]).size()  # torch.Size([25, 3, 300])
Edit: in this particular case, you can use torch.cat([x.float() for x in sequence], dim=1).half()
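Applied to shapes like the question's (shared first dimension, varying second), that one-liner behaves roughly like this, using smaller stand-in shapes:

```python
import torch

# Stand-ins for the question's [76080, 38] and [76080, 36] half tensors.
sequence = [torch.randn(5, 38).half(), torch.randn(5, 36).half()]

# Concatenate along the dimension that varies, via the float round-trip.
merged = torch.cat([x.float() for x in sequence], dim=1).half()
print(merged.shape)  # torch.Size([5, 74])
```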
edited Mar 8 at 20:02
answered Mar 8 at 18:42
Separius
Hey Separius, thanks for the answer, but could you explain what dim is and how I should set it?
– Omar Abdelaziz
Mar 8 at 19:20
Also, I have a list of tensors sorted in descending order, and the shape of the first tensor is torch.Size([76080, 38]).
– Omar Abdelaziz
Mar 8 at 19:25
The shape of the other tensors differs in the second element; for example, the second tensor in the list is torch.Size([76080, 36]).
– Omar Abdelaziz
Mar 8 at 19:26
When I remove the dim I get this error: RuntimeError: _th_cat is not implemented for type torch.HalfTensor
– Omar Abdelaziz
Mar 8 at 19:28
Unfortunately I get the same error.
– Omar Abdelaziz
Mar 8 at 19:31
A Tensor in PyTorch isn't like a list in Python, which can hold objects of variable length.
In PyTorch, you can convert a fixed-length array to a Tensor:
>>> torch.Tensor([[1, 2], [3, 4]])
tensor([[1., 2.],
        [3., 4.]])
But not a ragged one:
>>> torch.Tensor([[1, 2], [3, 4, 5]])
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-16-809c707011cc> in <module>
----> 1 torch.Tensor([[1, 2], [3, 4, 5]])

ValueError: expected sequence of length 2 at dim 1 (got 3)
The same applies to torch.stack.
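A quick sketch of that stack behavior, with small made-up shapes:

```python
import torch

a = torch.ones(2, 3)
b = torch.ones(2, 3)
print(torch.stack([a, b]).shape)  # torch.Size([2, 2, 3]): equal shapes stack fine

c = torch.ones(2, 4)
try:
    torch.stack([a, c])  # unequal shapes cannot be stacked
except RuntimeError as e:
    print("stack failed:", e)
```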
answered Mar 8 at 7:41
cloudyyyyy
Hi cloudyy, thank you for your answer, it's helpful... but is there any solution if I have a list of tensors with different lengths to stack into one tensor?
– Omar Abdelaziz
Mar 8 at 19:08