Better (but not perfect) HMW DNA extraction protocol

I wrote some time ago about the protocol I used to prepare HMW DNA for the new HA412 assembly. The advantage of that protocol is that it doesn't need much tissue to start with, it's quick, and it can work quite well. However, it is also quite unreliable, and will sometimes fail miserably.

To prepare HMW DNA for H. anomalus I tried a different protocol, suggested by Allen Van Deynze at UC Davis. They used it on pepper to prepare HMW DNA for 10X linked reads (the same application I had in mind), and obtained fragments of average size ~150-200 Kb. The resulting 10X assembly was quite spectacular (N50 = 3.69 Mbp for a 3.21 Gbp genome) and was recently published.

Even-more-diluted BigDye

It turns out that you can use half the amount of BigDye that is recommended by NAPS/SBC for Sanger sequencing and have no noticeable drop in sequence quality. The updated recipe for the working dilutions:

BigDye 3.1 stock: 0.5 parts
BigDye buffer: 1.5 parts
Water: 1 part

I will prepare all future working dilutions using this recipe and put them in the usual box in the common -20 ºC freezer. For more details on how to prepare a sequencing reaction see this post, and for how to purify them see this or this post.
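For scaling the recipe above to an aliquot of any size, here is a quick back-of-the-envelope sketch (the function name is mine; the ratios are the ones from the recipe):

```python
# Scale a parts-based BigDye working-dilution recipe to a target volume.
# Ratios are from the post; the function name is my own.

def dilution_volumes(parts, total_ul):
    """Convert a parts-based recipe into microlitre volumes."""
    total_parts = sum(parts.values())
    return {name: round(p / total_parts * total_ul, 1) for name, p in parts.items()}

# Updated "half BigDye" recipe: 0.5 : 1.5 : 1 (BigDye : buffer : water)
half_recipe = {"BigDye 3.1": 0.5, "BigDye buffer": 1.5, "water": 1.0}

# For a 150 ul aliquot (enough for ~50 reactions at 3 ul each):
print(dilution_volumes(half_recipe, 150))
# 25 ul BigDye + 75 ul buffer + 50 ul water
```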

Streamlined GBS protocol

We already have a GBS protocol on the lab blog, but since it contains three different variants (Kate’s, Brook’s and mine) it can be a bit messy to follow. Possibly because I am the only surviving member of the original trio of authors of the protocol, the approach I used seems to have become the standard in the lab, and Ivana was kind enough to distill it into a standalone protocol and add some additional notes and tips. Here it is!

Simplified GBS protocol 2017

BigDye 3.1

While cleaning up the freezer a few months back, we found, encrusted in a block of ice at the bottom of the common lab freezer, a box with a sizeable amount of BigDye 3.1, the reagent used to prepare samples for Sanger sequencing (basically a PCR master mix with labelled nucleotides). As that is expensive stuff (what we have is worth $3,000-4,000), I tested it to see if it would still work. All the tests I did were rated "great sequence" when I got the results back from NAPS, so the BigDye is fine.

Thanks to a donation of buffer from NAPS, I have diluted some of the BigDye to the same working concentration NAPS uses (1 part of BigDye, 1.5 parts of buffer, 0.5 parts of water). Follow the instructions on the NAPS website (http://naps.msl.ubc.ca/dna-sequencing/dna-sequencing-services/user-prepared/) to prepare your sample, and use 3 µl of the diluted BigDye. There are eight aliquots of about 150 µl each (50 reactions) in a box with a yellow "BigDye" label in the common -20 ºC freezer. If you plan to do only a few reactions at a time, consider making smaller aliquots for your personal use (BigDye doesn't like repeated freeze-thaw cycles). To avoid confusion, I kept the concentrated BigDye and dilution buffer in their Applied Biosystems box on the door of the first freezer on the right in the freezer room. If we run out of the diluted BigDye, just let me know and I'll be happy to prepare more.

How to do effective SPRI beads cleaning

SPRI bead cleaning is one of the most frequently repeated steps during library preparation, and probably the step where most of us lose a lot of precious DNA.

Losing DNA scared me so much (because the loss is measurable) that I hesitated a long time before using beads to concentrate genomic DNA, since the usual recovery rate is ~50%.


Fortunately, with some practice and a lot of patience, it is possible to reach 90% recovery. How do you lose as little DNA as possible? Here are some guidelines:

How to do GBS libraries with “difficult” DNA samples

First of all, let's be clear: having a good amount of high-quality DNA should be the starting point for all projects. Recently we had a conversation at lab meeting about the "one rule" for succeeding in establishing a lab (quoting Loren): "Don't try to save money on DNA extraction. Working with high-quality DNA reduces cost at all downstream steps, even in bioinformatics."

However, if you need to work with "historical" DNA samples from the lab (I genotyped DNA plates at least 8 years old) or DNA from collaborators over whose quality you have no control, and/or there is no more plant tissue to redo the extraction, here are some tips on how to get the most out of (almost) nothing.

I started the GBS protocol with 100 ng of DNA, and it works. However, if you want to save yourself a lot of time, and the lab some money, on repeated PCRs, repeated samples, and a lot of repeated Qubit measurements, start with 200 ng.

A) If some of your DNA samples are <8.5 ng/ul (100ng protocol):

Among the 1500 DNA samples I received from a collaborator, 134 did not meet the requirement (>8.5 ng/ul) to start the GBS. I thought about concentrating this DNA with different methods: 1) beads: you need to be ready to lose 50% of the DNA; 2) SpeedVac: I did not find one (supposedly there is one in the Adams lab?) and I was concerned about over-concentrating the TE at the same time as the DNA.

Fortunately, if you look at the digestion step, a large part of the digestion mix volume is water/Tris. By removing this water, I was able to include DNA with concentrations >5.8 ng/ul in the protocol, recovering half of my problematic samples. Just be extra careful when pipetting the 2.8 ul of "water-free" digestion master mix. I got good PCR amplification for these samples.
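As a sanity check, both cutoffs fall straight out of concentration = 100 ng / (available DNA volume). A sketch, where the DNA volumes are back-calculated from the two cutoffs rather than taken from the protocol:

```python
# Back-of-the-envelope check behind the concentration thresholds above.
# The numbers from the post are the 100 ng input and the two cutoffs
# (8.5 and 5.8 ng/ul); the DNA volumes are back-calculated from them.

TARGET_NG = 100

def min_concentration(max_dna_volume_ul):
    """Lowest concentration that still delivers TARGET_NG in the available volume."""
    return TARGET_NG / max_dna_volume_ul

# Standard digestion mix leaves room for ~11.8 ul of DNA:
print(round(min_concentration(11.8), 1))   # ~8.5 ng/ul
# Dropping the water/Tris frees up ~5.4 ul more (~17.2 ul of DNA):
print(round(min_concentration(17.2), 1))   # ~5.8 ng/ul
```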

B) If you are desperate:

I used whole genome amplification (WGA) prior to starting the GBS protocol to increase the DNA concentration of "historical" DNA samples. You will probably recover most of your DNA samples if they are above 1 ng/ul.

However, DO NOT MIX genome-amplified and plain genomic DNA on the same plate for sequencing, especially if you pool your library before doing PCR and Qubiting. The WGA samples amplify much better, and Sariel showed me libraries in which a few WGA samples took up a large share of the sequencing reads. It's a recipe for disaster and high missing data.

My strategy was to Qubit all the DNA plates and estimate the remaining volume. If a sample had less than 100 ng remaining, I did WGA, but moved it to a dedicated WGA plate. It's a bit more work, because if your samples are already in plates you will need to relocate all of them; from my experience, it's worth it.
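The triage above can be sketched in a few lines (the function name and example values are mine; the 100 ng cutoff is from the text):

```python
# Sketch of the triage described above: Qubit each well, estimate remaining
# volume, and route anything under 100 ng total to a dedicated WGA plate.
# Function name and example values are mine; 100 ng is the cutoff from the post.

def triage(conc_ng_per_ul, volume_ul, cutoff_ng=100):
    """Return 'WGA plate' if the remaining DNA is below the cutoff."""
    total_ng = conc_ng_per_ul * volume_ul
    return "WGA plate" if total_ng < cutoff_ng else "direct GBS"

print(triage(2.0, 30))   # 60 ng left  -> WGA plate
print(triage(8.5, 20))   # 170 ng left -> direct GBS
```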

Whole Genome Amplification

I used Whole Genome Amplification (WGA) to recover some very old DNA and include as many samples as possible in the GBS plates for a mapping project. I will do another post about that, but here I would like to summarize how I did the whole genome amplification.

For information on how the whole genome amplification method works, check the previous post by Moira.

I used the Qiagen Repli-g Mini Kit with all volumes halved.

The protocol is the following:

2.5 ul of DNA
2.5 ul of buffer D1 (DLB + nuclease-free H2O)
Incubate for 3 min at room temperature
Add 5 ul of buffer N1 (stop solution + nuclease-free H2O)
Vortex and centrifuge
Add 15 ul of master mix (Repli-g reaction buffer + DNA polymerase)
Incubate at 30 °C for up to 16 h, then inactivate the enzymes at 65 °C

I found that 6 h was good for the amount of DNA I was hoping to recover. I recovered at least 50x the amount I started with, typically starting with concentrations of 1-10 ng/ul and getting on average 60 ng/ul out of the WGA protocol.
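A rough yield check for the half-volume reaction (the 25 ul reaction volume is just the sum of the volumes listed above; the example input concentration is illustrative):

```python
# Rough fold-amplification check for the half-volume Repli-g reaction.
# Reaction volume (25 ul = 2.5 + 2.5 + 5 + 15) comes from the protocol above;
# the example input concentration is only an illustration.

def fold_amplification(in_conc, in_vol, out_conc, out_vol):
    """Ratio of DNA out (ng) to DNA in (ng)."""
    return (out_conc * out_vol) / (in_conc * in_vol)

# 2.5 ul of a 5 ng/ul sample in, ~60 ng/ul out in 25 ul:
print(fold_amplification(5, 2.5, 60, 25))   # 120-fold, comfortably above 50x
```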

Depending on the intended use of the DNA, you may want to play with the incubation time (e.g. keep it short), especially if you plan to use a PCR-based approach later on (and almost everything is PCR-based…).

Link to user manual for new, old drying ovens in the Biosci lab.

There are two new-to-the-lab drying ovens in the Biosci lab. They appear to be older models. They do not have digital displays of the inside temperature, nor temperature settings on the dial. Rather, they have dedicated glass thermometers and dial settings 1-10. I just installed a Hobo temperature logger, set to take readings every ten minutes. I'll adjust the settings over the next few days to try to get a bead on what those dial numbers mean.

There is a link above to the drying-oven’s manual, but it is very basic and I didn’t find it to be much help.

September 9, 2015: Follow-up

Regarding the older-model drying ovens (from Velland’s lab).

Center temp reading:

Setting 1: 26 deg. C.

Setting 2: 51 deg. C.

Setting 3: 65 deg. C.

Note: there is a marked discrepancy between readings from the glass thermometers in the oven and the Hobo temp logger on which the above information is based. The glass thermometers appear to read a lower temperature than the Hobo logger, up to 10 degrees lower in some cases. The Hobo used during this trial was new and factory-calibrated. At least one of the glass thermometers appears to have bubbles in the metering liquid, so it may be faulty.

Temperature setting 2: the temp logger was in the center of an empty oven with the vent partially open during readings. When the vent (located on top of the unit) was opened and both shelves were filled with damp samples, the top shelf appeared to run about 10 degrees cooler than the bottom shelf (according to the glass thermometers). Filled with damp samples and with the vent fully open, the average temperature for the top shelf was 33 deg. C, and according to the glass thermometer the bottom shelf was about 43 degrees. It may be worth noting that these readings were taken from glass thermometers lying directly on the metal shelves, whereas the samples themselves were elevated off the shelves somewhat by their stems and sepals/calyces, and so likely insulated from quick fluctuations, as they would have been by the surrounding air and enclosing plastic bags. At the last measurement, bottom-shelf samples were warm, but not hot, to the touch. Also, there was a vigorously living spider moving about in one of the bottom-shelf bags.
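If you just need a rough dial-to-temperature guess between the logged settings, a simple linear interpolation over the center-of-oven readings above will do (settings past 3 were not measured, so this sketch refuses them):

```python
# Rough dial-to-temperature interpolation from the Hobo readings above
# (empty oven, center of chamber). Settings 1-3 only; anything outside
# that range is untested, so the function raises instead of extrapolating.

readings = {1: 26, 2: 51, 3: 65}  # dial setting -> deg C

def estimate_temp(setting):
    """Linearly interpolate between the logged dial settings."""
    points = sorted(readings.items())
    for (s1, t1), (s2, t2) in zip(points, points[1:]):
        if s1 <= setting <= s2:
            return t1 + (t2 - t1) * (setting - s1) / (s2 - s1)
    raise ValueError("setting outside the measured 1-3 range")

print(estimate_temp(1.5))  # midway between settings 1 and 2
```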


Second barcode set

There is now a second set of barcoded adapters that allows higher multiplexing. They also appear to address the quality issues which have been observed in the second read of GBS runs.

This blog post has 1) info on how to use the barcodes and where to find them, and 2) some data that might convince you to use them.

Usage

These add a second barcode to the start of the second read, before the MspI restriction site. The first bases of the second read contain the barcode, just like with the first read. Marco T. designed and ordered these, and the info needed to order them is here: https://docs.google.com/spreadsheets/d/1ZXuHKfaR1BYPBX6g0p9GdZHp_21A3z_9pPt_aW0amwM/edit?usp=sharing

I’ve labeled them MTC1-12 and the barcode sequences are as follows.

MTC1 AACT
MTC2 CCAG
MTC3 TTGA
MTC4 GGTCA
MTC5 AACAT
MTC6 CCACG
MTC7 CTTGTA
MTC8 TCGTAT
MTC9 GGACGT
MTC10 AACAGAT
MTC11 CTTGTTA
MTC12 TCGTAAT

They are used in place of the common adapter in the standard protocol (1 ul/sample). One possible use, and the simplest example, would be to run 12 plates in a lane. In that case you would make a ligation master mix for each plate containing a different MTC adapter.

Where are they? In the -20 °C, at the back left corner of the bay, on the bottom shelf, in a box with a pink lab-tape label that says something to the effect of "barcodes + barcoded adapters 1-12". This contains the working concentration of each MTC adapter. Beside it is a box containing the unannealed, as-ordered oligos and the annealed stock; the information about what I did and what is in the box is written there. The stock needs an additional 1/20 dilution to get to the working concentration.

How it looks

First, the quality of the second read is just about as nice as the first read. Using FastQC to look at 4 million reads from a random run:

Read one:
[FastQC per-base quality plot, read 1]

Read two:
[FastQC per-base quality plot, read 2]

Now for the slightly more idiosyncratic part: read counts. In short, I don't see any obvious issue with any of these barcodes. I did 5 sets of 5 plates/lane. For all the plates I used the 97-192 barcodes for the PstI side; each plate then got a different MTC barcode for the MspI side. Following the PCR I pooled all of the samples from each plate and quantified. Each plate had a different number of samples, which I took into account during the pooling step. Below are the read counts from a randomly selected 4 million reads, corrected for the number of samples in each plate. Like I said, it is a little idiosyncratic, but the take-home is that the counts are about as even as you might expect given the usual inaccuracies in the lab, my hands, and the fact that this is a relatively small sample.

Lane 1
MTC5	14464
MTC1	13518
MTC7	14463
MTC9	13448
MTC3	14232

Lane 2	
MTC10	30395
MTC6	11267
MTC2	8263
MTC4	19295
MTC8	14766

Lane 3	
MTC5	16631
MTC7	17315
MTC11	11623
MTC9	16256
MTC3	13831

Lane 4		
MTC10	11302
MTC6	12120
MTC4	10326
MTC12	18959
MTC8	12832
	
Lane 5
MTC1	13151
MTC6	13490
MTC2	12851
MTC11	12460
MTC12	17296
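To put a number on "about as even as you might expect", here is a quick evenness check using the Lane 2 counts (the most uneven lane); nothing beyond the Python standard library:

```python
# Quick evenness check on the per-plate read counts above, using the
# Lane 2 numbers (the most uneven of the five lanes).
from statistics import mean, stdev

lane2 = {"MTC10": 30395, "MTC6": 11267, "MTC2": 8263,
         "MTC4": 19295, "MTC8": 14766}

counts = list(lane2.values())
cv = stdev(counts) / mean(counts)   # coefficient of variation across plates
spread = max(counts) / min(counts)  # fold difference, best vs worst plate

print(f"CV = {cv:.2f}, max/min = {spread:.1f}x")
```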

DSN depletion for GBS libraries

This step is mentioned in our current GBS protocol, but I forgot to upload it until now. It’s basically the same as the WGS one, with minor changes. I am attaching a couple of bioanalyzer plots of the same library before and after DSN treatment. The sharp peaks/thick bands disappearing after the DSN treatment are likely chloroplast fragments.


Battle of the lids!

Everyone knows the foil lids are the undisputed lab champs for normal use and storage. What many people don't know is that they are more than $2 each! Also, sometimes we run out and they are on backorder. Luckily, we have more than 1000 other lids on hand. The problem is that some have bad reputations. Let's find out whether those reputations are earned!

TL;DR: use Fisherbrand, and seal with a 75 °C lid temperature.


Home-brew WGS library multiplexing

There are two main ways to barcode WGS libraries so that they can be run together in the same lane:

– In-line barcodes: unique sequences are located at the very end of one or both adapters, so the barcode sits at the very beginning of each read from a given library. This is the barcode system normally used for GBS libraries as well.

– Indices: barcodes are in the middle of one or both adapters. These barcodes are read through an independent round of sequencing. For a paired-end library you would therefore have two rounds of sequencing of your fragment and a third round of sequencing for the index (and, I believe, a fourth one if you have dual indices). This is the system used in most commercial kits.


PstI-MspI GBS protocol

The "protocol" that we have been using is available here as a Google doc. I use quotes there because, as you will see, there is no single protocol used by the lab to date. Without analyzed sequence data to compare the described protocols, the best advice I can give you is to pick a sensible pipeline for your needs, taking into consideration the time/effort/desired output that works for you, and apply that pipeline consistently to all samples for the same project. All of the described methods have resulted in libraries that pass QC and generate sufficient reads during sequencing. Good luck!

GBS barcodes 1-96 stocks

In the freezer there is a deep-well plate with annealed barcodes 1-96, labelled plate 1C. Recently Dan Bock and I used it to make stock concentration plates. Plate 1C has been Qubited three times; here is an Excel spreadsheet with the concentrations. They are in chronological order (i.e. Dan's measurements are the most recent). I've also included an average column.

qubit-1-96GBS

So if you need to make your own plates for barcodes 1-96, you can use this. Dan Bock recommends diluting to a 4X concentration, Qubiting that plate, and then using it to dilute to 1X if you want to be more accurate.
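Dan's two-step dilution is just C1V1 = C2V2 applied twice. A sketch, where the stock and working concentrations are illustrative (use the spreadsheet averages for real plates):

```python
# C1V1 = C2V2 sketch for the two-step dilution: stock -> 4X -> Qubit -> 1X.
# The concentrations here are examples only; pull the real stock values
# from the spreadsheet averages for your plate.

def dilute(c_stock, c_target, v_final):
    """Volumes (stock, water) for one dilution step (C1V1 = C2V2)."""
    v_stock = c_target * v_final / c_stock
    return v_stock, v_final - v_stock

# e.g. a 40 ng/ul stock well diluted to 4X = 8 ng/ul in a 50 ul volume
# (assuming, for illustration, a 2 ng/ul 1X working concentration):
v_stock, v_water = dilute(40, 8, 50)
print(v_stock, v_water)  # 10.0 ul stock + 40.0 ul water
```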

Depletion of repetitive sequences – WGS libraries

As you know, the sunflower genome contains a large amount of repetitive sequence; that is why it is so big and so annoying to sequence. I have been working for a while on optimizing a depletion protocol, to try to get rid of some repetitive sequences (transposons, chloroplast DNA…) in NGS libraries.