Partition Athena query by S3 created date



I have an S3 bucket with ~70 million JSON objects (~15 TB) and an Athena table that I query by timestamp and some other keys defined in the JSON.

It is guaranteed that the timestamp in the JSON is more or less equal to the S3 created date of the object (or at least close enough for the purposes of my query).

Can I somehow improve query performance (and cost) by adding the created date as something like a "partition"? As far as I understand, that only seems to be possible for prefixes/folders.

Edit:
I currently simulate this by using the S3 Inventory CSV to pre-filter by created date, then downloading all the matching JSONs and doing the rest of the filtering locally, but I'd like to do all of it inside Athena, if possible.
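To illustrate, this is the kind of query I would like to run directly in Athena (a sketch only; created_date as a partition-like column, and the table and key names are hypothetical):

-- created_date would ideally be backed by a partition, so Athena only reads those two days
SELECT *
FROM my_table
WHERE created_date BETWEEN DATE '2019-03-07' AND DATE '2019-03-08'
  AND some_other_key = 'some_value'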

amazon-s3 amazon-athena aws-glue

asked Mar 8 at 15:56 by waquner (edited Mar 8 at 16:23)
1 Answer

There is no way to make Athena use things like S3 object metadata for query planning. The only way to make Athena skip reading objects is to organize the objects in a way that makes it possible to set up a partitioned table, and then query with filters on the partition keys.

It sounds like you have an idea of how partitioning in Athena works, and I assume there is a reason you are not using it. However, for the benefit of others with similar problems coming across this question, I'll start by explaining what you can do if you can change the way the objects are organized. I'll give an alternative suggestion at the end; you may want to jump straight to that.



I suggest organizing the JSON objects using prefixes that contain some part of the timestamps of the objects. Exactly how much depends on the way you query the data: you don't want it too granular, but not too coarse either. Making it too granular will make Athena spend more time listing files on S3; making it too coarse will make it read too many files. If the most common time period of queries is a month, a month is a good granularity; if the most common period is a couple of days, then a day is probably better.



For example, if a day is the best granularity for your dataset, you could organize the objects using keys like this:

s3://some-bucket/data/2019-03-07/object0.json
s3://some-bucket/data/2019-03-07/object1.json
s3://some-bucket/data/2019-03-08/object0.json
s3://some-bucket/data/2019-03-08/object1.json
s3://some-bucket/data/2019-03-08/object2.json


You can also use a Hive-style partitioning scheme, which is what other tools like Glue, Spark, and Hive expect, so unless you have reasons not to, it can save you grief in the future:

s3://some-bucket/data/created_date=2019-03-07/object0.json
s3://some-bucket/data/created_date=2019-03-07/object1.json
s3://some-bucket/data/created_date=2019-03-08/object0.json


I chose the name created_date here; I don't know what would be a good name for your data. You can use just date, but remember to always quote it (and quote it differently in DML and DDL) since it's a reserved word.
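To make the quoting difference concrete, here is a hedged sketch (assuming a hypothetical variant of the table below whose partition key is named date; Athena's DDL is Hive-based and quotes identifiers with backticks, while its DML is Presto-based and uses double quotes):

-- DDL (Hive dialect): quote the reserved word with backticks
ALTER TABLE my_data ADD PARTITION (`date` = '2019-03-07') LOCATION 's3://some-bucket/data/date=2019-03-07/'

-- DML (Presto dialect): quote the same column with double quotes
SELECT COUNT(*) FROM my_data WHERE "date" = DATE '2019-03-07'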



Then you create a partitioned table (note that Athena requires the EXTERNAL keyword for tables over existing S3 data):

CREATE EXTERNAL TABLE my_data (
  column0 string,
  column1 int
)
PARTITIONED BY (created_date date)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://some-bucket/data/'
TBLPROPERTIES ('has_encrypted_data'='false')


Some guides will then tell you to run MSCK REPAIR TABLE to load the partitions for the table. If you use Hive-style partitioning (i.e. …/created_date=2019-03-08/…) you can do this, but it takes a long time and I wouldn't recommend it. You can do a much better job of it by adding the partitions manually, like this:



ALTER TABLE my_data ADD
  PARTITION (created_date = '2019-03-07') LOCATION 's3://some-bucket/data/created_date=2019-03-07/'
  PARTITION (created_date = '2019-03-08') LOCATION 's3://some-bucket/data/created_date=2019-03-08/'
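If you want to double-check which partitions were registered, Athena supports listing a table's partitions:

SHOW PARTITIONS my_data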


Finally, when you query the table, make sure to include a condition on the created_date column, to give Athena the information it needs to read only the objects that are relevant for the query:

SELECT COUNT(*)
FROM my_data
WHERE created_date >= DATE '2019-03-07'


You can verify that the query becomes cheaper by observing the difference in the amount of data scanned when you change, for example, created_date >= DATE '2019-03-07' to created_date = DATE '2019-03-07'.
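As a quick sanity check using the same example table, run each statement separately and compare the data-scanned figure Athena reports:

-- reads every partition from 2019-03-07 onwards
SELECT COUNT(*) FROM my_data WHERE created_date >= DATE '2019-03-07'

-- reads only the 2019-03-07 partition, so the reported data scanned should drop
SELECT COUNT(*) FROM my_data WHERE created_date = DATE '2019-03-07'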




If you are not able to change the way the objects are organized on S3, there is a poorly documented feature that makes it possible to create a partitioned table even when you can't change the data objects. You create the same prefixes as suggested above, but instead of moving the JSON objects into this structure, you put a file called symlink.txt in each partition's prefix:



s3://some-bucket/data/created_date=2019-03-07/symlink.txt
s3://some-bucket/data/created_date=2019-03-08/symlink.txt


In each symlink.txt you put the full S3 URIs of the files that you want to include in that partition, one per line. For example, in the first file you could put:

s3://data-bucket/data/object0.json
s3://data-bucket/data/object1.json

and in the second file:

s3://data-bucket/data/object2.json
s3://data-bucket/data/object3.json
s3://data-bucket/data/object4.json


Then you create a table that looks very similar to the table above, but with one small difference:

CREATE EXTERNAL TABLE my_data (
  column0 string,
  column1 int
)
PARTITIONED BY (created_date date)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://some-bucket/data/'
TBLPROPERTIES ('has_encrypted_data'='false')


Notice the value of INPUTFORMAT: it is now SymlinkTextInputFormat instead of TextInputFormat.



You add partitions just like you do for any partitioned table:



ALTER TABLE my_data ADD
  PARTITION (created_date = '2019-03-07') LOCATION 's3://some-bucket/data/created_date=2019-03-07/'
  PARTITION (created_date = '2019-03-08') LOCATION 's3://some-bucket/data/created_date=2019-03-08/'


The only Athena-related documentation of this feature that I have come across is the S3 Inventory documentation on integrating with Athena.

answered Mar 8 at 18:53 by Theo