
Creating a DataFrame from RDD while specifying DateType() in schema



I am creating a DataFrame from an RDD, and one of the values is a date. I don't know how to specify DateType() in the schema.



Let me illustrate the problem at hand:

One way to load the date into the DataFrame is to first read it as a string and then convert it to a proper date with the to_date() function.



from pyspark.sql.types import Row, StructType, StructField, StringType, IntegerType, DateType
from pyspark.sql.functions import col, to_date

values = sc.parallelize([(3, '2012-02-02'), (5, '2018-08-08')])
rdd = values.map(lambda t: Row(A=t[0], date=t[1]))

# Declare the date column as a string in the schema.
schema = StructType([
    StructField('A', IntegerType(), True),
    StructField('date', StringType(), True)
])
df = sqlContext.createDataFrame(rdd, schema)

# Finally, convert the string into a date using the to_date() function.
df = df.withColumn('date', to_date(col('date'), 'yyyy-MM-dd'))
df.show()
+---+----------+
|  A|      date|
+---+----------+
|  3|2012-02-02|
|  5|2018-08-08|
+---+----------+

df.printSchema()
root
 |-- A: integer (nullable = true)
 |-- date: date (nullable = true)


Is there a way to use DateType() in the schema and avoid having to convert the string to a date explicitly?



Something like this:

values = sc.parallelize([(3, '2012-02-02'), (5, '2018-08-08')])
rdd = values.map(lambda t: Row(A=t[0], date=t[1]))
# Somewhere we would need to specify the date format 'yyyy-MM-dd' too; I don't know where, though.
schema = StructType([StructField('A', IntegerType(), True), StructField('date', DateType(), True)])


UPDATE: As suggested by @user10465355, the following code works:



import datetime

schema = StructType([
    StructField('A', IntegerType(), True),
    StructField('date', DateType(), True)
])
rdd = values.map(lambda t: Row(A=t[0], date=datetime.datetime.strptime(t[1], "%Y-%m-%d")))
df = sqlContext.createDataFrame(rdd, schema)
df.show()
+---+----------+
|  A|      date|
+---+----------+
|  3|2012-02-02|
|  5|2018-08-08|
+---+----------+

df.printSchema()
root
 |-- A: integer (nullable = true)
 |-- date: date (nullable = true)
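One detail worth flagging (my observation, not part of the original update): strptime() returns a datetime.datetime, which DateType() accepts only because datetime.datetime is a subclass of datetime.date. A minimal sketch of a stricter variant, assuming the same values and schema as above:

import datetime

# .date() drops the (midnight) time component so that Spark receives a
# plain datetime.date, which is exactly what DateType() expects.
rdd = values.map(lambda t: Row(A=t[0], date=datetime.datetime.strptime(t[1], "%Y-%m-%d").date()))
df = sqlContext.createDataFrame(rdd, schema)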









python apache-spark pyspark






asked Mar 7 at 7:51 by cph_sto (edited Mar 7 at 13:05)

1 Answer
































Long story short, a schema used with an RDD of external objects is not intended to be used that way: the declared types should reflect the actual state of the data, not the desired one.



In other words, to allow:

schema = StructType([
    StructField('A', IntegerType(), True),
    StructField('date', DateType(), True)
])


the data corresponding to the date field should use datetime.date. So, for example, with your RDD[Tuple[int, str]]:



import datetime

spark.createDataFrame(
    # Since values from the question are just two-element tuples,
    # we can use mapValues to transform the "value",
    # but in the general case you'll need map.
    values.mapValues(datetime.date.fromisoformat),
    schema
)
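For records that are not simple key-value pairs, a minimal sketch of the general map version (my illustration, assuming the values RDD and schema from the question; not part of the original answer):

# Rebuild each record, parsing the date field by position.
# datetime.date.fromisoformat requires Python 3.7+; on older versions,
# datetime.datetime.strptime(t[1], "%Y-%m-%d").date() does the same job.
spark.createDataFrame(
    values.map(lambda t: (t[0], datetime.date.fromisoformat(t[1]))),
    schema
)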


The closest you can get to the desired behavior is to convert the data (RDD[Row]) with the JSON reader, using dicts:



          from pyspark.sql import Row

          spark.read.schema(schema).json(rdd.map(Row.asDict))


or, better, with explicit JSON dumps:



          import json
          spark.read.schema(schema).json(rdd.map(Row.asDict).map(json.dumps))


but that's of course much more expensive than explicit casting, which, by the way, is easy to automate in simple cases like the one you describe:



from pyspark.sql.functions import col

(spark
    .createDataFrame(values, ("a", "date"))
    .select([col(f.name).cast(f.dataType) for f in schema]))
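Run against the sample data, this cast-based approach should reproduce the schema from the question. The sketch below is my adaptation, not the answer's exact code: it additionally aliases each column back to its schema name, since the output names would otherwise follow the ("a", "date") labels:

df = (spark
    .createDataFrame(values, ("a", "date"))
    .select([col(f.name).cast(f.dataType).alias(f.name) for f in schema]))
df.printSchema()
# root
#  |-- A: integer (nullable = true)
#  |-- date: date (nullable = true)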





answered Mar 7 at 10:48 by user10465355 (edited Mar 7 at 13:13)
• Thank you so much for your efforts. Appreciated!! Your code, with slight modification, works :) I just made a post-update.
  – cph_sto, Mar 7 at 13:06










