
Ancient DNA Authentication in Time Series Data

The output from metaDMG contains some basic statistics that you will want to extract for your project, and potentially for future publications, such as the total number of reads classified to any taxonomic level, the total number of reads classified within each superkingdom, and so on. These numbers can be obtained from the files in the data/lca/ folder using the simple text-parsing commands you have already used during the first two days of the course ('wc -l' or 'grep -c'). Please collect the total number of reads classified per sample, the total number of reads classified to plants per sample, and the total number of reads classified to animals per sample.
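
A minimal sketch of how this could be done with the commands from the first two days (the lca folder location and the *.lca.txt naming are assumptions; adjust them to your setup):

for lca in ~/course/wdir/mapping/data/lca/*.lca.txt
do
echo $lca
wc -l < $lca                 # total number of reads classified (minus any header lines)
grep -c "Viridiplantae" $lca # reads classified within plants
grep -c "Metazoa" $lca       # reads classified within animals
done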

Please also ensure you have performed the last step in yesterday's tutorial, i.e. converting the metaDMG output to a .csv file. Today we will explore a few examples of how to visualize such data in R.

First, activate the r environment and start R:

conda activate r
R

Next, load the libraries needed and the data. If you are missing some libraries, you can install them with install.packages("name.of.library")

library(tidyverse) 
library(reshape2)
library(vegan)
library(rioja)
library(ggplot2)
library(dplyr)
library(gghighlight)

Set the working directory

setwd("~/course/wdir/mapping/plots/")

Import data

df <- read_csv("~/course/wdir/mapping/metaDMGresults.csv")

Count the number of columns in the dataframe, just to familiarize yourself with it.

ncol(df)

Print all sample names

unique(df$sample)

Import metadata

metaDATA <- read.delim("~/course/data/shared/metadata/metadata.tsv")

Replace header "sample_name" with "sample"

colnames(metaDATA)[colnames(metaDATA) == "sample_name"] <- "sample"
colnames(metaDATA)[colnames(metaDATA) == "years_bp"] <- "YearsBP"

Merge the metadata and dataframe by sample

dt <- merge(df, metaDATA, by = "sample")

Now check that new columns have been added to the dt dataframe (this comparison should return TRUE)

ncol(df) < ncol(dt)

Print all different ages

unique(dt$YearsBP)

Print all column names of the table

colnames(dt)

Minimum amount of DNA damage (filter threshold)

DamMin2 = 0.00

Minimum MAP significance (filter threshold)

MapSig2 = 0

Minimum number of reads required per taxon

MinRead2 = 100

Minimum mean read length

MinLength = 35

Subset the table to plant (Viridiplantae) genera using grepl and filter. You need to set the parameters yourself and possibly add more; please discuss in your group how it should be done. It is probably best to first explore the data with less stringent filters and plot it, and then decide on the basis of the data how you want it filtered.

dt2 <- dt %>% filter(MAP_damage > DamMin2, N_reads >= MinRead2, mean_L > MinLength, MAP_significance  > MapSig2,  grepl("Viridiplantae",tax_path), grepl("\\bgenus\\b", tax_rank), grepl("", sample))

Now plot your data. You can add other variables in gghighlight to illustrate different cut-offs, other types of values, etc.

pdf(file = "aeCourse.DNAdamageModelJitterPlot.pdf", width = 8, height = 4)
ggplot() +
  geom_jitter(data = dt2, aes(x=as.numeric(YearsBP), y=MAP_damage, size = N_reads), alpha =0.5) +
  gghighlight(N_reads > 500) +
  xlab("Years BP") +
  ylab("DNA damage") +
  labs(color = "Values for taxa with \n>500 reads", size = "Number of reads")
dev.off()

Plot the plant taxa, highlighting taxa with more than 500 reads (you can also add the min, max and median), and save as a pdf.

pdf(file = "aeCourse.DNAdamageLRJitterPlot.pdf", width = 8, height = 4)
ggplot() +
  geom_jitter(data = dt2, aes(x=as.numeric(YearsBP), y=MAP_damage, size = MAP_significance), alpha =0.5) +
  gghighlight(N_reads > 500) +
  xlab("Years BP")+
  ylab("DNA damage") +
  labs(color = "Values for taxa with \n>500 reads", size = "Significance \nfor Taxa with >500 reads")
dev.off()

Create filtered table for DNA damage model

filtered_data <- dt2 %>% filter(N_reads >= 500)

Now you should make decisions on how the data should be filtered. You can use the following thresholds as a starting point:

MapSig3 = 3
MinRead3 = 100
MinLength3 = 35

Subset the table using grepl and filter; again, you need to set the parameters yourself and possibly add more.

filtered_data_metazoan <- dt %>% filter(N_reads >= 10, mean_L > MinLength3, MAP_significance  > MapSig3,  grepl("Metazoa",tax_path), grepl("\\bgenus\\b", tax_rank), grepl("", YearsBP))
unfiltered_data_metazoan <- dt %>% filter(N_reads >= 2, mean_L > MinLength3, MAP_significance  > 1,  grepl("Metazoa",tax_path), grepl("\\bgenus\\b", tax_rank), grepl("", YearsBP))
filtered_data_viridiplantae <- filtered_data %>% filter(N_reads >= 100, mean_L > MinLength3, MAP_significance  > MapSig3,  grepl("Viridiplantae",tax_path), grepl("\\bgenus\\b", tax_rank), grepl("", YearsBP))
unique(filtered_data_viridiplantae$YearsBP)

You can also make a smaller table with only the columns of your choice; example below.

select(filtered_data_viridiplantae, tax_name, MAP_damage, MAP_significance, N_reads, YearsBP)

List the unique plant taxa

unique(filtered_data_viridiplantae$tax_name)
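
unique() lists the taxa; to get the actual count, you can wrap it in length():

length(unique(filtered_data_viridiplantae$tax_name))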

Make a wide table of the plants for downstream plotting and data wrangling.

data_wide_plants <- dcast(filtered_data_viridiplantae, tax_name ~ YearsBP, value.var="N_reads", fun.aggregate = sum)
n <- ncol(data_wide_plants)
b2 <- data_wide_plants[,2:n]
rownames(b2) <- data_wide_plants$tax_name
b2[is.na(b2)] <- 0 #set all NAs as zeros

Print the column sums (reads per sample/age), the row sums (reads per taxon) and the column names

colSums(b2)
rowSums(b2)
colnames(b2)

Test: if this one fails, the number-of-reads column might contain text.

b2[is.na(b2)]=0

Create a percentage table: each column (sample/age) is divided by its column sum, so the taxa in every column sum to 100%.

i=ncol(b2)
b3=as.matrix(b2[,seq(1,i)])  

b4 <- prop.table(data.matrix(b3), margin=2)*100 # proportion table; margin=2 computes the proportions per column (per sample/age)
colSums(b4) # check: each column should sum to 100

Next we will transpose the table, for plotting it as a strat.plot

b5 <- t(b4)

and set the variable z to the years BP, which are now the row names.

z <- as.numeric(rownames(b5)) # years BP / depth

and plot it on a stratigraphic plot (typical pollen type plot)

pdf(file = "aeCourse.Stratplot_Plants_area.pdf", width = 15, height = 5)
strat.plot(b5, y.rev=TRUE, plot.line=TRUE, plot.poly=TRUE, plot.bar=FALSE, lwd.bar=10, sep.bar=TRUE, scale.percent=TRUE, xSpace=0.01, x.pc.lab=TRUE, x.pc.omit0=TRUE, srt.xlabel=45, las=2, exag=TRUE, exag.mult=5, ylabel = "years BP",  yvar = z)
dev.off()

Now we will convert the wide table into a long table format to plot it with ggplot

y <- ncol(b5)
b6 <- melt(b5[,1:y])
sapply(b6, class)
colnames(b6) <- c("YearsBP","Taxa", "percentage")

p1 <- ggplot(b6, aes(y=Taxa, x=YearsBP, fill=percentage)) +   geom_tile(colour="lightgrey") +
  theme_minimal() + scale_fill_gradient(low="white", high="darkgreen") + scale_y_discrete(limits=rev)
p1 + theme(axis.text.x = element_text(angle = 45, vjust = 1, hjust =1)) + ggtitle("percentage of taxa plotted as heatmap") +
  xlab("YearsBP") + ylab("YearsBP") + labs(fill = "percentage %")

These were just a few examples; it is possible to summarize and plot the data in a wide variety of ways. What you need and will use depends on your data, your study and the story you want to tell.

euka - detection of tetrapodic and arthropodic taxa in modern and ancient environmental DNA using pangenomic reference graphs

If you are looking for other available options just run ~/course/data/vgan/bin/vgan euka for the manual.

Within the vgan folder you can find a folder called bin where the vgan executable can be found. So, to run euka you can just use the following command:

~/course/data/vgan/bin/vgan euka -fq1 <(zcat ~/course/wdir/mapping/PRI-TJPGK-CATN-96-98.fq.gz.vs.fq.gz) -o PRI-TJPGK-CATN-96-98 -t 5

If you want to input multiple fastq files at once, please use a file descriptor, otherwise euka will not recognize all the files. For example:

~/course/data/vgan/bin/vgan euka -fq1 <(zcat ~/course/wdir/mapping/*vs.fq.gz) -o all_samples -t 5

Further authentication

Now that we have identified a likely ancient dataset (taxa showing damage and a mean read length > 35) and also verified a few animal taxa in the profiles, we could go back to the R scripts for read-length plotting that we used yesterday and replace the overall superkingdoms with the taxa that would be interesting and/or necessary to verify further.

Addition of references to reference database

conda activate day1

Nuclear and mitochondrial genomes can be downloaded from any source; here we use NCBI.
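
They can be fetched in many ways; one option, assuming Entrez Direct (efetch) is installed (it may not be in the course environment), is:

efetch -db nuccore -id NC_001941.1 -format fasta > NC_001941.1.fa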

Now let's check the accession number/ID in the header of the fasta

head -1 NC_001941.1.fa

And then check that the accession-to-taxid mapping matches in the NCBI taxonomy

zgrep NC_001941 ../../../data/shared/taxonomy/acc2taxid.map.gz

Normally these would match, but for the course we made a custom taxonomy/database and changed the accession slightly. So let's edit the fasta header to match our taxonomy.

Open the file with vim and change the header to NC_001941.1

vim NC_001941.1.fa
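
If you prefer a non-interactive edit, the same change can be made with sed (this rewrites the first header line in place):

sed -i '1s/^>.*/>NC_001941.1/' NC_001941.1.fa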

bowtie2-build NC_001941.1.fa NC_001941.1.fa

for file in *vs.fq.gz
do
bowtie2 -U $file -x NC_001941.1.fa --no-unal --threads 5 | samtools view -bS - > $file.ovis.bam
done &> ovis_map.log
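
The depth commands below expect coordinate-sorted (and ideally indexed) BAM files; a minimal sketch, assuming the output names from the loop above:

for bam in *.ovis.bam
do
samtools sort -O BAM -o ${bam%.bam}.sort.bam $bam
samtools index ${bam%.bam}.sort.bam
done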

Now check the depth of coverage (before competitive mapping):

samtools depth -a PRI-TJPGK-CATN-224-226.fq.gz.vs.fq.gz.ovis.sort.bam
samtools depth -a PRI-TJPGK-CATN-224-226.fq.gz.vs.fq.gz.ovis.sort.bam | cut -f3
samtools depth -a PRI-TJPGK-CATN-224-226.fq.gz.vs.fq.gz.ovis.sort.bam | cut -f3 | datamash mean 1

If you forget to set the -a option, the depth will be reported per covered base only (positions with zero coverage are skipped):

samtools depth PRI-TJPGK-CATN-224-226.fq.gz.vs.fq.gz.ovis.sort.bam | cut -f3 | datamash mean 1

Phylogenetic placement and Pathphynder analysis

1. Part A. Extraction of reads from lca file

Create a sample list based on the lca file names

ls *lca.txt > sample.list

Taxa list file: open nano, enter the genus names you want to extract (one per line) and save the file as 'taxa.list'

nano taxa.list

Create readID files based on sample.list and taxa.list

while read -r line
do
arr=($line)
lib=${arr[0]}
echo $lib
cat taxa.list | parallel -j20 "grep {} $lib | cut -f1,2,3,4,5,6,7 -d: > $lib.{}.readID.txt"
done < sample.list

Remove the readID files for taxa that were not found in a given sample (empty files)

wc -l *.readID.txt| awk '$1 == 0' | awk '{print $2}' > rm.list
cat rm.list | parallel -j5 "rm {}"

Create a sum-up file with the total number of sequences found per sample and taxon

wc -l *.readID.txt| paste > tot_genus_sequences.txt

Create fastq from readIDs

for infile in *readID.txt
do
bname=$(basename $infile)
bname1=$(echo $bname | sed 's/.readID.txt*/.fq/')   
bname2=$(echo $bname | cut -f1 -d. )
echo $bname1
echo $bname2
seqtk subseq path_to_files_fq/adap2_kmer2_*$bname2*.fq.pp.rmdup.fq $bname > $bname1 &
done

1. Part B: Prepare fasta reference files

Download the reference mitochondrial genomes from NCBI, either directly in the NCBI Nucleotide web interface (as below) or from the command line (see the sketch after the reference table):

Send to > File > Fasta > Create File

The samples used here are based on Taylor, William T. T., et al. "Evidence for early dispersal of domestic sheep into Central Asia." Nature Human Behaviour 5.9 (2021): 1169-1179.

See ReferenceTable1.xlsx
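
Alternatively, the references can be fetched from the command line, assuming Entrez Direct is installed and that you have saved the accession numbers from ReferenceTable1.xlsx to a plain-text file with one accession per line (here called accessions.txt, a name chosen for this example):

while read acc
do
efetch -db nuccore -id $acc -format fasta > $acc.fasta
done < accessions.txt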

Be sure to remove tabs, spaces and characters such as , " ' : ; ( ) from the sequence headers, for example:

sed "s/:/_/g" in.fa > out.fa
sed "s/ /_/g" in.fa > out.fa
sed "s/;/_/g" in.fa > out.fa
sed "s/__/_/g" in.fa > out.fa
sed "s/(/./g" in.fa > out.fa
sed "s/)/./g" in.fa > out.fa

Concatenate reference fasta files for each Haplogroup

cat *.fasta > in_to_align.fa

Align the haplogroup references

mafft --thread 5 --leavegappyregion Ovis_aries_musimon2.fasta > Ovis_aries_musimon2_aligned.fa

Create a Consensus sequence for each Haplogroup

in Geneious (if available): Tools > Generate Consensus Sequence > majority base calling

Or use the provided Python script, which uses Biopython's alignment consensus functionality (Biopython needs to be installed: conda install -c conda-forge biopython)

python /path_to_Script/get_consensus.txt Reference_aligned.fa Reference_output_Consensus.fa  

The consensus headers need to be renamed

for file in *.fa
do
bn=`basename $file .fa`
awk '/^>/ {gsub(/.fa(sta)?$/,"",FILENAME);printf(">%s\n",FILENAME);next;} {print}' $file > ${bn}_renamed.fa ; rm $file
done

Concatenate all consensus sequences

cat *Consensus.fa > MSA_to_align.fa

Alignment references

mafft --thread 5 --leavegappyregion MSA_to_align.fa  > MSA_out_aligned.fa

Build a reference tree (a Maximum Likelihood tree, as a check)

raxml-ng --msa MSA_out_aligned.fa --model GTR+G --prefix output_name --threads 2

If there are any warnings, rerun on the '*reduced.phy' file to get a clean tree.

2. Mapping to Consensus references of Ovis Haplogroups

Build indexes for database

for file in *.fa
do
bname=$(basename $file)
bowtie2-build -f $file DB_$bname
done

Mapping in a for loop. Note that this outer per-sample loop continues through the mapping, sorting and coverage-stats blocks below and is only closed by the final 'done' at the end of this section.

for file in $(pwd)/*.fq
do
bname=$(basename $file)
echo $bname
bname2=$(echo $bname | cut -f1-3 -d_)
echo $bname2
bname3=$(echo $bname | sed 's/.fq.pp.rmdup.fq*/_holi/')
basepath=$(pwd)/
basefolder=$basepath
echo $basepath
echo $bname3
mkdir $basepath$bname3
cd $basepath$bname3

Mapping to the different databases (still inside the per-sample loop)

for DB in /path_to_DB/name_DB
do
echo Mapping adap2_kmer2_$bname.pp.rmdup.fq against $DB
bowtie2 --threads 5 -x $DB -U $file --no-unal | samtools view -bS - > $bname2.$(basename $DB).bam
done

Consider trying different settings for bowtie2 (the block below is identical to the one above; adjust the parameters to experiment)

for DB in /path_to_DB/name_DB
do
echo Mapping adap2_kmer2_$bname.pp.rmdup.fq against $DB
bowtie2 --threads 5 -x $DB -U $file --no-unal | samtools view -bS - > $bname2.$(basename $DB).bam
done

Sorting bam files

for bam in *.bam
do
echo Sorting bam files
samtools sort -O BAM -o sort.$bam $bam
done

getting coverage stats

for sort in sort.*.bam
do
echo Bamcov $sort
/path_to_bamcov/bamcov -m -w0 $sort | paste > bamcov_hist_$sort.txt
/path_to_bamcov/bamcov $sort | paste > bamcov_table_$sort.txt
done

cd $basepath
done # closes the outer per-sample loop

It is important to assess the coverage of the reads mapped to the different mitogenomes before proceeding with the tree.

4. Prepare alignment files for the tree

Move all the sort.*.bam files into a new folder

mkdir new_folder

for file in *holi/sort.*.bam
do
mv $file new_folder
done

Generate a consensus sequence (fasta) from the reads in each bam file

for file in sort*.bam
do
angsd -doFasta 2 -doCounts 1 -i $file -out $file.consensus.fa
done

gunzip

for file in *.consensus.fa.fa.gz
do
bname=$(basename $file)
echo $bname
gunzip $file
done

Delete empty (size 0) files

for file in *.consensus.fa.fa
do
bname=$(basename $file)
echo $bname
find $file -size 0 -delete
done

rename headers

for file in *.fa.fa
do
bn=`basename $file .fa.fa`
awk '/^>/ {gsub(/.fa(sta)?$/,"",FILENAME);printf(">%s\n",FILENAME);next;} {print}' $file > ${bn}_renamed.fa ; rm $file
done

5. (Optional) Building Maximum Likelihood Tree

Concatenate the sequences

cat *renamed.fa MSA_not_aligned.fa > Allref_mycons.fa

alignment

mafft --thread 40 --leavegappyregion Allref_mycons.fa > Allref_mycons_aligned.fa

tree

raxml-ng --msa output_name.raxml.reduced.phy --model GTR+G --prefix output_nameRED --threads 2

Rerun using the reduced.phy file if raxml suggests an optimised (reduced) alignment.

6. Building a tree in BEAST: this can take some time, so the tree file should already be provided.

We ran a tree in BEAST based on the MSA file for the references alone and for the references together with the sample files. Parameters used in BEAST (v1.10.4): 20,000,000 iterations (chain length), the HKY substitution model and a Coalescent Constant Population prior. In BEAUti: import the alignment, set the parameters and save the file as Aln_mafft_taxa_references.xml

run BEAST or BEAST2 on server:

beast -threads n Aln_mafft_taxa_references.xml

Check the log file in Tracer: open the .log file and check the burn-in; if it does not look okay, increase the number of iterations.

Saving Tree

TreeAnnotator → open the .trees file and save it as a Maximum Clade Credibility Tree with 10% burn-in; when saving/naming the file you have to add the file extension .nexus

If using BEAST2, the tree will be saved with a .tree extension that can be read by FigTree.

Save Newick Tree

FigTree > open tree.nexus > export the tree as Newick > save with the .nwk extension (you can visualise the node/branch labels as posterior values)

Check the .nwk file

Check in a text editor whether tree.nwk contains any ' (quote) symbols; if so, remove them.
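
This can also be done from the command line; the following removes any single quotes in place:

sed -i "s/'//g" tree.nwk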

7. pathPhynder

pathPhynder evaluates SNPs that are informative for the branches of a reference tree, calls those SNPs in a given dataset of ancient samples, and finds the best path and branch where these samples can be placed in the tree.

create VCF with snp-sites from multiple sequence alignment (MSA) file

snp-sites -v -c -o Aln_mafft_taxa_references.vcf Aln_mafft_taxa_references.fa

Fix the VCF file with the provided R script (this requires the stringr R package to be installed):

Rscript /path_to_Rscript/fix_vcf.R Aln_mafft_taxa_references.vcf Aln_mafft_taxa_references_output.vcf

fix consensus naming problem in vcf file (replace 1’s with “consensus”)

awk '{ if ($1 == "1") $1="consensus";}1' Aln_mafft_taxa_references_output.vcf | sed 's/ /\t/g' > Aln_mafft_taxa_references_fixed_output.vcf

Create a consensus for the MSA fasta and index it

python /path_To_Script/get_consensus.txt Aln_mafft_taxa_references.fa Cons_Aln_mafft_taxa_references.fa

bwa index Cons_Aln_mafft_taxa_references.fa

Create directory in pathPhynder_analysis folder

mkdir map_to_cons

bwa mapping of Ovis reads to consensus

bwa aln -l 1024 -n 0.001 -t 10 /path_to_folder/Consensus_Mafft_All_references.fa /path_to_folder/library.fq \
| bwa samse /path_to_folder/Consensus_Mafft_All_references.fa - /path_to_folder/library.fq \
| samtools view -F 4 -q 25 -@ 10 -uS - | samtools sort -@ 10 -o library.sort.bam

Count number of mapped reads

samtools view -c file.bam

8. Run pathPhynder

Install pathPhynder

git clone https://github.com/ruidlpm/pathPhynder.git

Make a new .bash_profile and open it

touch ~/.bash_profile
vi ~/.bash_profile

Manually add this line, using the path where pathPhynder is installed:

# .bash_profile
alias pathPhynder="Rscript /path_to_pathPhynder_folder/pathPhynder.R"

save file

Esc :wq

source it

source ~/.bash_profile

test it

pathPhynder -h

Create directory in pathPhynder_analysis folder

mkdir pathphynder_results

Assign informative SNPs to tree branches

conda activate day2

phynder -B -o /path_to_folder/branches.snp /path_to_folder/Mafft_All_BEAST4.nwk /path_to_folder/Mafft_All_references_fixed_consensus.vcf

conda activate r

Install samtools

mamba install -c bioconda samtools


Prepare data - this will output a bed file for calling variants and tables for phylogenetic placement

pathPhynder -s prepare -i /path_to_folder/Mafft_All_reference.nwk -p taxa_pathphynder_tree -f /path_to_folder/branches.snp -r /path_to_folder/Consensus_Mafft_All_references.fa

pathPhynder command 1: only transversions

pathPhynder -s all -t 100 -m transversions -i /path_to_folder/Mafft_All_references.nwk -p /path_to_folder/taxa_pathphynder_tree -l /path_to_folder/bamlist.txt -r /path_to_folder/Consensus_Mafft_All_references.fa

pathPhynder command 2: transitions and transversions

pathPhynder -s all -t 100 -i /path_to_folder/Mafft_All_references.nwk -p /path_to_folder/taxa_pathphynder_tree -l /path_to_folder/bamlist.txt -r /path_to_folder/Consensus_Mafft_All_references.fa

Visualise pdf files with the results.

Population genomics

In this tutorial we will investigate how the ancient bear DNA we retrieved from our environmental samples relates to modern black bear populations. For this, we use a pre-generated dataset of SNP genotype data for a modern bear reference panel with 79 individuals from five black bear populations, as well as two polar bear samples. Our three ancient environmental bear samples are represented by pseudo-haploid genotypes, which we generated by selecting a random high-quality allele for SNP positions in the reference panel which were covered by mapped sequencing reads in the respective sample. ​

First, we activate our environment and install some R packages using mamba:

conda activate day3
mamba install r-tidyverse r-ggrepel

Now let's set up a working folder and copy the data:

mkdir popgen
cd popgen
cp ~/course/data/day3/popgen/* .

The dataset is provided in the binary format of PLINK (https://www.cog-genomics.org/plink/), a program widely used for SNP genotype data management and analysis. The dataset is composed of three files:

modern_polar_mexican.bed
modern_polar_mexican.bim
modern_polar_mexican.fam
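
A quick way to check the size of the dataset - each line in the .fam file is an individual and each line in the .bim file is a SNP:

wc -l modern_polar_mexican.fam modern_polar_mexican.bim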

You can learn more about the different file formats here: https://www.cog-genomics.org/plink/1.9/formats

As an example of how we can use PLINK, the following command calculates some basic missing data summaries:

plink --bfile modern_polar_mexican --missing --out modern_polar_mexican

One of the resulting output files,

modern_polar_mexican.imiss

contains the per-individual missing genotype rate (6th column, F_MISS).

Examine the file and find the entries for the three Mexican samples. How much missing data do they show? Would you expect to be able to obtain meaningful results with this amount of data?
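
One way to pull out those rows (the matching pattern below is an assumption - adjust it to the actual sample IDs in the file):

awk 'NR==1 || /Mexican/' modern_polar_mexican.imiss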

​ We will now try to make sense of our data by carrying out two types of analyses. In the first part, we will use principal component analysis (PCA) for a first exploration of the structure in our dataset. Following this, we will use the so-called 'f-statistic' framework to perform more in-depth statistical analyses and test different hypotheses regarding the relationship of the ancient bears with the different modern populations. ​

PCA

We will use the smartpca program from the EIGENSOFT package (https://github.com/DReichLab/EIG), a widely used tool for PCA on genotype data. It has a number of features useful for the analysis of ancient DNA data, in particular the option to "project" individuals with poor quality data / high missingness onto principal components inferred from a high-quality reference panel.

In order to carry out the smartpca analysis, we need to prepare a file setting the parameters for the analysis. Below is an example parameter file modern_all.smartpca.par:

genotypename:    modern_polar_mexican.bed
snpname:         modern_polar_mexican.bim
indivname:       modern_polar_mexican.fam
evecoutname:     modern_all.evec
evaloutname:     modern_all.eval
familynames:     NO
numoutevec:      20
numthreads:	 2
numoutlieriter:	 0
poplistname:   	 modern_all.pops.txt
lsqproject:  YES
pordercheck:  NO
autoshrink: YES

The first five lines specify the input and output files. Other important parameters are:

numoutevec - the number of principal components to be returned

poplistname - the name of a file containing the population IDs of the samples used to infer the principal components. When using PLINK format, population IDs for individuals have to be specified in column 6 of the .fam file. All individuals in the dataset whose population ID is not included in the file will be projected onto the inferred components

lsqproject - a PCA projection algorithm appropriate for samples with high amounts of missing data.

To run the PCA, we use the command

smartpca -p modern_all.smartpca.par | tee modern_all.smartpca.log

which will run smartpca and print its log messages both to stdout and to a file modern_all.smartpca.log. Once the run is complete, the PCA coordinates for each sample (i.e. the eigenvectors) can be found in the output file modern_all.evec.

​ To visualize the PCA results, we can use the provided R script plot_pca.R, which outputs simple bi-plots of all PCs in the output file: ​

Rscript plot_pca.R modern_all.evec label_inds.txt modern_all.pdf

​ The script takes three command line arguments:

  • the filename for the smartpca eigenvector results
  • a file with a list of IDs for samples to be highlighted in the plot
  • the filename for the output file in pdf format ​

The analysis with the parameter file modern_all.smartpca.par outlined here performs a PCA projecting the ancient bear samples onto the full set of modern reference samples, including both black bears and polar bears. Explore the output and answer some of the following questions: ​

  • Which populations are separated on the first few PCs?
  • Where do the ancient bear samples fall, and how can we interpret their position?
  • Is there any difference in PCA positions between the three bear samples, and if so, what could be the interpretation?

Once you have explored these results and familiarized yourself with the analysis, you can run PCA on other subsets of the data. A parameter file and corresponding poplistname file for a subset excluding the two polar bears is provided as modern_blackbear.smartpca.par. Explore its output and try to answer some of the same questions as above. If you wish to explore other subsets, you can create your own poplistname file with your populations of interest, and create a corresponding smartpca parameter file to run the analysis.
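
A minimal sketch of how such a subset could be set up (the population labels written to the file are placeholders, not the actual IDs; use the labels present in column 6 of the .fam file):

awk '{print $6}' modern_polar_mexican.fam | sort | uniq -c   # list the available population IDs
printf "PopulationA\nPopulationB\n" > my_subset.pops.txt     # hypothetical labels - replace with real ones

Then copy modern_all.smartpca.par, point poplistname to the new file (and give the evec/eval outputs new names), and rerun smartpca as above.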

f-statistics

After our exploratory analysis using PCA, we would like to dive a bit deeper into the population genetics of our dataset. For this we will use the f-statistic framework, implemented in the R package admixtools. We activate the R environment and install some needed packages:

conda activate r
mamba install bioconductor-ggtree r-ape

An R script with an example analysis is provided in the file f_statistics.R. You can open this file in VS Code and use an interactive R terminal to work through the commands.
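
Purely as an illustration of what such an analysis can look like - the actual commands to use are in f_statistics.R, and the population labels below are placeholders, not the real IDs in the dataset:

library(admixtools)

# compute f2 statistics from the PLINK files and store them on disk (prefix and output folder assumed)
extract_f2("modern_polar_mexican", "f2_dir")
f2_blocks <- f2_from_precomp("f2_dir")

# outgroup-f3 statistic f3(Outgroup; AncientSample, ModernPop): higher values indicate more
# shared drift between the ancient sample and that modern population
qp3pop(f2_blocks, "PolarBear", "AncientSample", "EasternBlackBear")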