CODE_OF_CONDUCT.md: 1 addition, 1 deletion
@@ -34,7 +34,7 @@ This Code of Conduct applies both within project spaces and in public spaces whe
## Enforcement
- Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at goring@wisc.edu. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
README.md: 74 additions, 47 deletions
@@ -3,51 +3,62 @@
[![Stargazers][stars-shield]][stars-url]
[![Issues][issues-shield]][issues-url]
[![MIT License][license-shield]][license-url]
+ [![codecov][codecov-shield]][codecov-url]
# **MetaExtractor: Finding Fossils in the Literature**
This project aims to identify research articles which are relevant to the [_Neotoma Paleoecological Database_](http://neotomadb.org) (Neotoma), extract data relevant to Neotoma from each article, and provide a mechanism for the data to be reviewed by Neotoma data stewards and then submitted to Neotoma. It is being completed as part of the _University of British Columbia (UBC)_ [_Master of Data Science (MDS)_](https://masterdatascience.ubc.ca/) program in partnership with the [_Neotoma Paleoecological Database_](http://neotomadb.org).
**Table of Contents**
- [**MetaExtractor: Finding Fossils in the Literature**](#metaextractor-finding-fossils-in-the-literature)
The goal of this component is to monitor and identify new articles that are relevant to Neotoma. This is done by using the public [xDD API](https://geodeepdive.org/) to regularly retrieve recently published articles. Article metadata is queried from the [CrossRef API](https://www.crossref.org/documentation/retrieve-metadata/rest-api/) to obtain data such as journal name, title, abstract, and more. The article metadata is then used to predict whether or not the article is relevant to Neotoma.
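As an illustration of the metadata step, the sketch below queries the public CrossRef REST API for a single DOI and pulls out a few of the fields used for prediction (journal, title, abstract). It is not the project's pipeline code; the helper name and example DOI are hypothetical, and not every CrossRef record includes an abstract.

```python
# Illustrative sketch only (not the MetaExtractor pipeline code):
# fetch article metadata from the CrossRef REST API for one DOI.
import requests

def fetch_crossref_metadata(doi: str) -> dict:
    """Return selected CrossRef metadata fields for a DOI."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    response.raise_for_status()
    message = response.json()["message"]
    return {
        "title": (message.get("title") or [None])[0],
        "journal": (message.get("container-title") or [None])[0],
        "abstract": message.get("abstract"),  # often missing; depends on the publisher
    }

# Hypothetical usage:
# metadata = fetch_crossref_metadata("10.1234/example-doi")
```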
The model was trained on ~900 positive examples (a sample of articles currently contributing to Neotoma) and ~3,500 negative examples (a sample of articles unrelated or closely related to Neotoma). A logistic regression model was chosen for its strong performance and interpretability.
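A minimal sketch of that modelling idea, assuming TF-IDF features over title/abstract text and scikit-learn; the project's actual features, preprocessing, and hyperparameters may differ, and the example texts below are made up.

```python
# Minimal sketch of a text-based relevance classifier, not the trained model itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 1 = relevant to Neotoma, 0 = not relevant.
texts = [
    "Holocene pollen records from a lake sediment core",
    "Deep learning methods for image retrieval",
]
labels = [1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

# Probability that a new article is relevant.
print(clf.predict_proba(["Late Quaternary vegetation history inferred from fossil pollen"]))
```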
Articles predicted to be relevant will then be submitted to the Data Extraction Pipeline for processing.
- To run the Docker image for article relevance prediction pipeline, please refer to the instruction [here](docker/article-relevance/README.md)
+ To run the Docker image for the article relevance prediction pipeline, please refer to the instructions [here](docker/article-relevance/README.md).
+ The model can be retrained using reviewed article data; please refer to [here](docker/article-relevance-retrain/README.md) for the instructions.
- ## **Data Extraction Pipeline**
+ ### **Data Extraction Pipeline**
The full text is provided by the xDD team for the articles that are deemed to be relevant, and a custom-trained **Named Entity Recognition (NER)** model is used to extract entities of interest from each article.
@@ -64,72 +75,89 @@ The entities extracted by this model are:
The model was trained on ~40 existing paleoecology articles manually annotated by the team, comprising **~60,000 tokens** with **~4,500 tagged entities**.
The trained model is available for inference and further development on huggingface.co [here](https://huggingface.co/finding-fossils/metaextractor).
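For example, the published checkpoint can be loaded with the Hugging Face `transformers` pipeline roughly as sketched below; the exact label set and scores come from the model itself, and the sample sentence is invented.

```python
# Sketch of running inference with the published checkpoint via transformers;
# the entity labels returned are defined by the model, not by this snippet.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="finding-fossils/metaextractor",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

text = "Pollen of Picea and Pinus dated to about 9,000 BP was recovered from the core."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], f"{entity['score']:.3f}")
```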
Finally, the extracted data is loaded into the Data Review Tool where members of the Neotoma community can review the data and make any corrections necessary before submitting to Neotoma. The Data Review Tool is a web application built using the [Plotly Dash](https://dash.plotly.com/) framework. The tool allows users to view the extracted data, make corrections, and submit the data to be entered into Neotoma.
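The snippet below is not the Data Review Tool itself; it is only a minimal Plotly Dash sketch of the review pattern described above (display an extracted value, let a reviewer correct it, and capture the submission). All component IDs and the example value are hypothetical.

```python
# Minimal Dash sketch of a review-and-correct workflow; not the actual tool.
from dash import Dash, Input, Output, State, dcc, html

app = Dash(__name__)
app.layout = html.Div([
    html.H4("Review extracted entity"),
    dcc.Input(id="site-name", value="Lake Example", type="text"),  # hypothetical extracted value
    html.Button("Submit correction", id="submit"),
    html.Div(id="status"),
])

@app.callback(
    Output("status", "children"),
    Input("submit", "n_clicks"),
    State("site-name", "value"),
    prevent_initial_call=True,
)
def record_correction(n_clicks, corrected_value):
    # The real tool would persist this to its output files / database.
    return f"Recorded corrected value: {corrected_value}"

if __name__ == "__main__":
    app.run(debug=True)
```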
- First, begin by installing the requirements and Docker if not already installed ([Docker install instructions](https://docs.docker.com/get-docker/))
+ First, begin by installing the requirements.
+ For pip:
```bash
pip install -r requirements.txt
```
- A conda environment file will be provided in the final release.
- ### Entity Extraction Model Training
- The Entity Extraction Models can be trained using the HuggingFace API by following the instructions in the [Entity Extraction Training README](src/entity_extraction/training/hf_token_classification/README.md).
- The spaCy model training documentation is a WIP.
+ For conda:

```bash
conda env create -f environment.yml
```
- ### Data Review Tool
+ If you plan to use the pre-built Docker images, install Docker following these [instructions](https://docs.docker.com/get-docker/).
- The Data Review Tool can be launched by running the following command from the root directory of this repository:
+ To launch the app, run the following command from the root directory of this repository:
```bash
docker-compose up --build data-review-tool
```
- Once the image is built and the container is running, the Data Review Tool can be accessed at http://localhost:8050/. There is a sample "extracted entities" JSON file provided for demo purposes.
+ Once the image is built and the container is running, the Data Review Tool can be accessed at <http://0.0.0.0:8050/>. Sample `article-relevance-output.parquet` and `entity-extraction-output.zip` files are provided for demo purposes.
Please refer to the project wiki for the development and analysis workflow details: [MetaExtractor Wiki](https://github.com/NeotomaDB/MetaExtractor/wiki)
- ### Data Requirements
+ ### **Data Requirements**
Each of the components of this project has different data requirements. The requirements for each component are outlined below.
- #### Article Relevance Prediction
+ #### **Article Relevance Prediction**
+ The article relevance prediction component requires a list of journals that are relevant to Neotoma. The dataset used to train and develop the model is available for download [HERE](https://drive.google.com/drive/folders/1NpOO7vSnVY0Wi0rvkuwNiSo3sqq-5AkY?usp=sharing). Download all files and extract the contents into `MetaExtractor/data/article-relevance/raw/`.
+ #### **Data Extraction Pipeline**
- The article relevance prediction component requires a list of journals that are relevant to Neotoma. This dataset used to train and develop the model is available for download HERE. TODO: Setup public link for data download from project GDrive.
+ As the full text articles provided by the xDD team are not publicly available, we cannot create a public link to download the labelled training data. For access requests, please contact Simon Goring at <goring@wisc.edu> or Ty Andrews at <ty.elgin.andrews@gmail.com>.
- #### Data Extraction Pipeline
+ #### **Data Review Tool**
- As the full text articles provided by the xDD team are not publicly available we cannot create a public link to download the labelled training data. For access requests please contact Ty Andrews at ty.elgin.andrews@gmail.com.
+ Once the article relevance prediction and data extraction pipelines have been run, the output files can be used as input for the Data Review Tool. The Data Review Tool requires the following files:
- ### Development Workflow Overview
+ - `article-relevance-output.parquet` - output file from the article relevance prediction pipeline
+ - `entity-extraction-output.zip` - output file from the data extraction pipeline
- WIP
+ These files should be placed in a single folder; the path to that folder can be updated in the `docker-compose.yml` file. The default location is the `data/data-review-tool` directory.
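As an optional sanity check before launching the tool, the article relevance output can be inspected with pandas. The sketch below assumes only the default folder and file name mentioned above and does not presume any particular column names.

```python
# Quick look at the article relevance output before review; requires a parquet
# engine such as pyarrow to be installed.
import pandas as pd

df = pd.read_parquet("data/data-review-tool/article-relevance-output.parquet")
print(df.shape)
print(df.columns.tolist())
print(df.head())
```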
- ### Analysis Workflow Overview
+ ### **System Requirements**
- WIP
+ The project has been developed and tested on the following systems:
- ### System Requirements
+ - macOS Monterey 12.5.1
+ - Windows 11 Pro, version 22H2
+ - Ubuntu 22.04.2 LTS
- WIP
- ### **Directory Structure and Description**
+ The pre-built Docker images were built using Docker version 4.20.0 but should work with any Docker version from 4 onward.
+ ## **Directory Structure and Description**
```
├── .github/                        <- Directory for GitHub files
│   ├── workflows/                  <- Directory for workflows
├── assets/                         <- Directory for assets
├── docker/                         <- Directory for docker files
│   ├── article-relevance/          <- Directory for docker files related to article relevance prediction
+   ├── article-relevance-retrain/  <- Directory for docker files related to article relevance retraining
│   ├── data-review-tool/           <- Directory for docker files related to data review tool
│   ├── entity-extraction/          <- Directory for docker files related to named entity recognition
├── data/                           <- Directory for data
@@ -142,9 +170,6 @@ WIP
│   │   ├── processed/              <- Processed data
│   │   └── interim/                <- Temporary data location
│   ├── data-review-tool/           <- Directory for data related to data review tool
-   │   ├── raw/                    <- Raw unprocessed data
-   │   ├── processed/              <- Processed data
-   │   └── interim/                <- Temporary data location
├── results/                        <- Directory for results
│   ├── article-relevance/          <- Directory for results related to article relevance prediction
│   ├── ner/                        <- Directory for results related to named entity recognition
```
@@ -169,10 +194,10 @@ This project is an open project, and contributions are welcome from any individu