
A dataset for benchmarking keyphrase extraction and generation techniques on long English scientific documents. For more details about the dataset, please refer to the original paper - .

Data source -

Dataset Summary

Dataset Structure

Data Fields

  • id: unique identifier of the document.
  • sections: list of the names of all sections present in the document.
  • sec_text: list of sections, where each section is a whitespace-separated list of words (tokens).
  • sec_bio_tags: list of BIO tag sequences, one tag per token in each section.
  • extractive_keyphrases: list of keyphrases that appear verbatim in the document (present keyphrases).
  • abstractive_keyphrases: list of keyphrases that do not appear verbatim in the document (absent keyphrases).
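
The BIO tags mark the token spans that form extractive keyphrases: `B` opens a phrase, `I` continues it, and `O` marks tokens outside any phrase. As a minimal sketch, the tags for one section can be decoded back into phrase strings like this (the `decode_bio` helper and the toy tokens below are illustrative, not part of the dataset):

```python
def decode_bio(tokens, tags):
    """Decode a BIO tag sequence into the list of tagged phrases."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":
            # a new phrase starts; flush any phrase in progress
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:
            # continue the phrase in progress
            current.append(token)
        else:
            # "O", or a stray "I" with no open phrase: close any open phrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

# toy section: "we study image databases"
tokens = ["we", "study", "image", "databases"]
tags = ["O", "O", "B", "I"]
print(decode_bio(tokens, tags))  # ['image databases']
```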

Data Splits

Split        #datapoints
Train-Small  20,000
Train-Medium 50,000
Train-Large  90,019
Test         3,413
Validation   3,339

Usage

Small Dataset

from datasets import load_dataset

# get small dataset
dataset = load_dataset("midas/ldkp3k", "small")

def order_sections(sample):
  """
  Corrects the order in which sections appear in the document.
  The resulting order is: title, abstract, then the remaining body sections.
  Note: the pops below mutate the lists in the sample dict in place.
  """
  sections = []
  sec_text = []
  sec_bio_tags = []

  if "title" in sample["sections"]:
    title_idx = sample["sections"].index("title")
    sections.append(sample["sections"].pop(title_idx))
    sec_text.append(sample["sec_text"].pop(title_idx))
    sec_bio_tags.append(sample["sec_bio_tags"].pop(title_idx))

  if "abstract" in sample["sections"]:
    abstract_idx = sample["sections"].index("abstract")
    sections.append(sample["sections"].pop(abstract_idx))
    sec_text.append(sample["sec_text"].pop(abstract_idx))
    sec_bio_tags.append(sample["sec_bio_tags"].pop(abstract_idx))

  sections += sample["sections"]
  sec_text += sample["sec_text"]
  sec_bio_tags += sample["sec_bio_tags"]

  return sections, sec_text, sec_bio_tags

# print a sample from each split
for split in ["train", "validation", "test"]:
  print(f"Sample from {split} data split")
  sample = dataset[split][0]

  sections, sec_text, sec_bio_tags = order_sections(sample)
  print("Fields in the sample: ", list(sample.keys()))
  print("Section names: ", sections)
  print("Tokenized Document: ", sec_text)
  print("Document BIO Tags: ", sec_bio_tags)
  print("Extractive/present Keyphrases: ", sample["extractive_keyphrases"])
  print("Abstractive/absent Keyphrases: ", sample["abstractive_keyphrases"])
  print("\n-----------\n")
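
Because `sec_text` and `sec_bio_tags` are parallel lists of per-section sequences, models that expect a single token sequence per document can flatten the ordered sections while keeping the tags aligned. A small sketch with toy data (the variable contents are illustrative, not taken from the dataset):

```python
# toy per-section tokens and tags, e.g. as returned by order_sections
sec_text = [["A", "Title"], ["Some", "abstract", "text"]]
sec_bio_tags = [["B", "I"], ["O", "O", "O"]]

# flatten sections into one token sequence with aligned BIO tags
tokens = [tok for section in sec_text for tok in section]
tags = [tag for section in sec_bio_tags for tag in section]

assert len(tokens) == len(tags)  # one BIO tag per token
print(tokens)  # ['A', 'Title', 'Some', 'abstract', 'text']
```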

Medium Dataset

from datasets import load_dataset

# get medium dataset
dataset = load_dataset("midas/ldkp3k", "medium")

Large Dataset

from datasets import load_dataset

# get large dataset
dataset = load_dataset("midas/ldkp3k", "large")

Citation Information

Please cite the works below if you use this dataset in your work.

@inproceedings{dl4srmahata2022ldkp,
  title={LDKP - A Dataset for Identifying Keyphrases from Long Scientific Documents},
  author={Mahata, Debanjan and Agarwal, Naveen and Gautam, Dibya and Kumar, Amardeep and Parekh, Swapnil and Singla, Yaman Kumar and Acharya, Anish and Shah, Rajiv Ratn},
  booktitle={DL4SR-22: Workshop on Deep Learning for Search and Recommendation, co-located with the 31st ACM International Conference on Information and Knowledge Management (CIKM)},
  address={Atlanta, USA},
  month={October},
  year={2022}
}
@article{mahata2022ldkp,
  title={LDKP: A Dataset for Identifying Keyphrases from Long Scientific Documents},
  author={Mahata, Debanjan and Agarwal, Naveen and Gautam, Dibya and Kumar, Amardeep and Parekh, Swapnil and Singla, Yaman Kumar and Acharya, Anish and Shah, Rajiv Ratn},
  journal={arXiv preprint arXiv:2203.15349},
  year={2022}
}
@article{lo2019s2orc,
  title={S2ORC: The semantic scholar open research corpus},
  author={Lo, Kyle and Wang, Lucy Lu and Neumann, Mark and Kinney, Rodney and Weld, Dan S},
  journal={arXiv preprint arXiv:1911.02782},
  year={2019}
}
@inproceedings{ccano2019keyphrase,
  title={Keyphrase generation: A multi-aspect survey},
  author={{\c{C}}ano, Erion and Bojar, Ond{\v{r}}ej},
  booktitle={2019 25th Conference of Open Innovations Association (FRUCT)},
  pages={85--94},
  year={2019},
  organization={IEEE}
}
@article{meng2017deep,
  title={Deep keyphrase generation},
  author={Meng, Rui and Zhao, Sanqiang and Han, Shuguang and He, Daqing and Brusilovsky, Peter and Chi, Yu},
  journal={arXiv preprint arXiv:1704.06879},
  year={2017}
}

Contributions

Thanks to @debanjanbhucs, @dibyaaaaax, @UmaGunturi, and @ad6398 for adding this dataset.
