Structural Analysis Practical

Practical Overview

In this practical you will learn to use the main tools for structural analysis: FAST (tissue-type segmentation), FIRST (sub-cortical structure segmentation) and FSL-VBM (local grey matter volume difference analysis). In addition there are several optional extensions, including SIENA, that will be relevant to those with interests in particular types of structural analysis. We advise people to pick and choose based on their own particular interests; everyone should do FAST, but after that the parts are quite separate and can be done in any order (e.g. people particularly interested in VBM might want to do that before the section on FIRST).

  • FAST - Perform tissue-type segmentation and bias-field correction using FAST.
  • FIRST - Use FIRST for segmentation of sub-cortical structures. Introduces basic segmentation and vertex analysis for detecting group differences.
  • FSL-VBM - Perform an FSL-VBM (voxel-based morphometry) analysis for detecting differences in local grey matter volume.

Optional extensions:

  • SIENA - Use SIENA for detecting global grey-matter atrophy in longitudinal scans.
  • SIENAX - An introduction to the cross-sectional version of SIENA.
  • FIRST Revisited - Looking at the "uncorrected" FIRST outputs.
  • Multi-Channel FAST - An introduction to the multi-channel version of FAST, for use with multiple acquisitions (e.g. T1-wt, T2-wt, PD, ...).

FAST

In this section we segment single T1-weighted images with FAST and look at how to quantify the grey matter volume and the amount of bias field present.

FAST Input Preparation - BET

To begin with we will prepare data for FAST; this requires running BET for brain extraction. In addition, just for this practical, we will also extract a small ROI containing a few central slices so that FAST only takes a minute to process the data, instead of 10-15 minutes for a full brain.

Run BET on the input image structural to create structural_brain (type Bet for the GUI [ Bet_gui on a Mac], or bet for the command-line program).
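
For reference, the minimal command-line version (assuming the input file is called structural.nii.gz in the current directory):

  bet structural structural_brain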

Look at your data

View the output to check that BET has worked OK (e.g. change the colourmap for structural_brain to say Red-Yellow):
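
A hedged example of doing this from the command line:

  fslview structural structural_brain -l Red-Yellow &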

Close FSLView and create a cut-down version (containing a few central slices) of the brain-extracted image using the region-of-interest program fslroi. This will let you try out some of the FAST options without having to wait more than a minute each time.
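
For example (the slice offset and size here are illustrative - choose a few central slices appropriate to your image; a size of -1 means "the full extent" for that dimension):

  fslroi structural_brain structural_brain_roi 0 -1 0 -1 70 20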

Open structural_brain_roi in FSLView to see the cut-down image. See how few slices are left.

Image with Bias Field

You will also find an image in this directory called structural_brain_roi_inhomog which contains the same section of the same brain but with a different bias field or inhomogeneity (looking a bit more like a surface-coil acquisition).

Add structural_brain_roi_inhomog to the already open FSLView (or open a new one and load structural_brain_roi first) and then look at the difference between these images. Note how both grey matter and white matter are darker in the left anterior portion of the inhomogeneous image.

FAST - Single Channel Example

Run FAST (separately) on both structural_brain_roi and structural_brain_roi_inhomog. Use the GUI ( Fast [or Fast_gui on a Mac]) and turn on the "Estimated bias field" button (which saves a copy of the bias field). For the structural_brain_roi_inhomog case also open the "Advanced Options" tab and change the "Number of iterations for bias field removal" to 10 to account for the strong bias field in this case. Finally, don't forget to check that the output name is different for the two runs! Once this is set up press "Go" for both - they should only take a minute to run.
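
A hedged command-line equivalent (using the default output basenames, which give the *_bias names used below; -b saves the estimated bias field and -I sets the number of bias-removal iterations):

  fast -b structural_brain_roi
  fast -b -I 10 structural_brain_roi_inhomog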

View and compare the two *_seg.nii.gz output segmentations. Try using different colour maps for the segmentations when viewing the results.

Partial Volume Segmentation

Now let's look at the partial volume segmentations. View the different outputs in FSLView by first loading structural_brain_roi, then loading the PVE (Partial Volume Estimate) images as overlays, adjusting the overlay transparency as necessary. Note that you can tell FSLView what colourmaps and intensity ranges to use from the command line:
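
A hedged example (the PVE image names assume the default FAST output basename from the run above):

  fslview structural_brain_roi structural_brain_roi_pve_0 -l Red-Yellow -b 0,1 structural_brain_roi_pve_1 -l Blue-Lightblue -b 0,1 structural_brain_roi_pve_2 -l Green -b 0,1 &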

Identify which PVE component is the grey matter. Choose a voxel on the border of the grey matter and look at the values contained in the three PVE components. The values represent the volume fractions for the 3 classes (GM, WM, CSF) and should add up to one. Now pick a point in the middle of the grey matter and look at the three values here.

The PVE images are the most sensitive way to calculate the tissue volume which is present. For example, we can find the total GM volume with fslstats by doing:
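
A hedged example, assuming you identified pve_1 as the GM component above:

  fslstats structural_brain_roi_pve_1 -m -v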

The first number reported by fslstats gives the mean voxel GM PVE across the whole image and the third number gives the total volume of the image (in mm^3), so multiplying these together gives the total GM volume in mm^3 (for more details on fslstats just type fslstats to see its usage description).
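
For example (with invented numbers): if fslstats reports a mean GM PVE of 0.25 and a total image volume of 150000 mm^3, the total GM volume is 0.25 x 150000 = 37500 mm^3.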

Bias Field Correction

Now let's look at the bias field outputs - structural_brain_roi_bias and structural_brain_roi_inhomog_bias (these are FAST's estimates of the bias fields). View these in FSLView and set the display ranges to be equal for both images (e.g. 0.6 to 1.4). Notice how much stronger the second one is.
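
A hedged example:

  fslview structural_brain_roi_bias -b 0.6,1.4 structural_brain_roi_inhomog_bias -b 0.6,1.4 &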

Advanced: Work out how to use fslmaths to compare the estimated bias field between these cases. Create restored images using these bias field estimations and test how well this difference explains the additional bias field (which was artificially added in this case).
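
One possible approach (a sketch only - this gives away part of the exercise): take the ratio of the two estimated bias fields, and divide each input by its estimated bias field to create the restored images:

  fslmaths structural_brain_roi_inhomog_bias -div structural_brain_roi_bias bias_ratio
  fslmaths structural_brain_roi -div structural_brain_roi_bias structural_brain_roi_restored
  fslmaths structural_brain_roi_inhomog -div structural_brain_roi_inhomog_bias structural_brain_roi_inhomog_restored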

FIRST

In this section we lead you through examples of subcortical structure segmentation with FIRST, and some post-fitting statistical analyses.

Segmentation of structures

We begin by segmenting the left hippocampus and amygdala from a single T1-weighted image (from the OASIS database). The image is con0047_brain.nii.gz.

To start with, load this into FSLView to see the image.

Note that although this is not normally done, this image has had brain extraction run on it. This is due to the anonymisation done to the original image.

To perform the segmentation of the left hippocampus and amygdala we simply need to run one command:
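
A hedged reconstruction of the command (L_Hipp and L_Amyg are FIRST's names for the left hippocampus and amygdala; the output basename and the name of the pre-supplied registration matrix are assumptions based on the file names used later in this section):

  run_first_all -i con0047_brain -b -s L_Hipp,L_Amyg -a con0047_brain_to_std_sub.mat -o con0047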

This command (or script) will run several steps for you and has several options. It will take about 4-5 minutes to run, so while it is running read through the following description.

Options used in run_first_all

  • -i specifies the input image (T1-weighted)
  • -o specifies the output image basename (extensions will be added to this)
  • -b specifies that the input image has been brain extracted
  • -s specifies a restricted set of structures to be segmented (just two in this case)
  • -a specifies the affine registration matrix to standard space (optional)

The run_first_all script uses the best set of parameters (number of modes, intensity reference) to run for each structure, as determined by empirical experiments. Therefore it is not necessary to specify these values when running the method.

Normally the affine registration would be run as part of this script (just leave off the -a option and it will be done automatically), but it has been pre-supplied here in order to save time - as the registration takes about 6 minutes.

We will now go through how this script works and what to look for in the output.

Check the registration

Load the image con0047_brain_to_std_sub.nii.gz together with the 1mm standard space template image. Look at the alignment of the subcortical structures. It should be quite close but we do not expect it to be perfect.
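
For example (the standard template lives under $FSLDIR/data/standard):

  fslview $FSLDIR/data/standard/MNI152_T1_1mm con0047_brain_to_std_sub &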

This registration is normally created by run_first_all as the initial stage, but has been included here from a previous run to save time. The registration should always be performed using the tools in FIRST since it does a special registration, optimised for the sub-cortical structures. It begins with a typical 12 DOF affine registration using FLIRT, but then refines this in a second stage with a sub-cortical weighting image that concentrates purely on the sub-cortical parts of the image. Thus the final registration may not be as good in the cortex but will better fit the sub-cortical structures. However, this registration only removes the global affine component of the differences in the structures and hence will not be that precise. In addition, crucially, it leaves the relative orientation (pose) between the structures untouched.

Always make sure you check that the registration has worked before looking at other outputs.

We will now move on to looking at the other outputs, which should have been generated by run_first_all at this point. If the run_first_all command has not finished, have a quick look at the FIRST documentation page.

Before doing anything else we will check the output logs to see if any errors have occurred. Do this with the command:
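
A hedged sketch (the log directory name is an assumption - use whatever .logs directory run_first_all created):

  cat con0047.logs/*.e*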

If everything worked well you will see no output from this, otherwise it will show the errors. If any errors are shown, ask a tutor about them. You should always check the error files in the log directories for FIRST and other FSL commands that create log directories like this (e.g. TBSS, FSL-VBM, BEDPOSTX, etc.).

Boundary corrected segmentation output

In FSLView, open the image con0047_brain and add the image con0047_all_fast_firstseg on top.

This *_firstseg image shows the combined segmentation of all structures based on the surface meshes that FIRST has fit to the image. It is in the native space of the structural image (not in the standard space, although the registration before was required to move the model from the standard space back into this image's native space).

As converting the underlying FIRST meshes to a voxel-based image can create overlap at the boundaries, these boundary voxels have been "corrected" or re-classified by run_first_all using the default method (here it is FAST - which classifies the boundary voxels according to intensity). Now look at the uncorrected segmentations with the following:
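
For example:

  fslview con0047_brain con0047_all_fast_origsegs &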

Each structure is labeled with a different intensity value inside, and 100 + this value for the boundary voxels (the con0047_all_fast_origsegs image is a 4D image with each structure in a different volume). The intensity values assigned to the interior of each structure are given by the CMA labels.

Have a look at these images to see how good the segmentation is. Play with the transparency settings (or turn the segmentation on and off) to get a feeling for the quality.

The corrected image ( *_firstseg ) is normally the one that you would use to define an ROI or mask for a particular subcortical structure. For more details on the uncorrected image ( *_origsegs ), see the optional practical at the end.

Vertex Analysis using first_utils

Vertex analysis (or shape analysis) looks at how a structure may differ in shape between two groups (e.g. patients and controls). It looks at the differences directly in the meshes, on a vertex by vertex basis. This is different from using a whole-structure summary measure like volume, as it allows us to visualise the region of the shape that differs as well as the type of shape difference.

first_utils tests the differences in vertex location - here we will look at the difference in the mean vertex location between two groups of subjects, but it can also look for correlations. It projects the vertex locations onto the normal vectors of the average surface, so that it is sensitive to changes in the boundary location.

Here we will use an example dataset consisting of 8 subjects (5 controls and 3 Alzheimer's patients) on which we will run an analysis. As the number of subjects is small, the analysis will have fairly low statistical power, but in this case it still shows a clear effect. A full analysis, on a larger set of subjects, would proceed in exactly the same way.

List the files in this directory - we have already run FIRST on each subject in order to get a segmentation of the left hippocampus. So you will see files such as:
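
For example, a hedged partial listing (reconstructed from the names described below; the exact set of files will differ):

  con0047-L_Hipp_corr.nii.gz
  con0047-L_Hipp_first.bvars
  con0047-L_Hipp_first.nii.gz
  ...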

Most of them should be familiar from the previous example. Because only a single structure was run, the uncorrected segmentation is saved as con0047-L_Hipp_first and the boundary corrected segmentation is saved as con0047-L_Hipp_corr (rather than the names used before in the case of multiple structures). However, for vertex analysis we will be using the .bvars files, as they contain the information about the sub-voxel mesh coordinates.

Running vertex analysis

In general, to run shape analysis, you need to do the following:

  • To begin with, run FIRST on all subjects (this has already been done for you to save time). If you were running this yourself you would do it in the same way that we did in the previous section, specifying what structure(s) you are interested in (to do all 17 structures just leave out the -s option). We then use the .bvars files for the vertex analysis.
  • Check that the segmentations worked. In order to visualise the segmentation outputs of FIRST on a large number of subjects it is useful to generate summary reports that can be assessed efficiently. This can be easily done using first_roi_slicesdir, which shows an ROI (with 10 voxel padding) around the structure of interest for each subject, summarised into a single webpage. In this case run first_roi_slicesdir (a hedged sketch of the command is given at the end of this list) and then view the output index.html in a web browser (it will be created in a subdirectory called slicesdir/ ). Check that none of the segmentations have failed; make sure that you look at the axial, coronal and sagittal slices.
  • Combine all the mode parameters (.bvars files) into a single file. Each structure (model) that is fit with FIRST will generate a separate .bvars file. For a given structure (e.g. hippocampus) combine all the relevant .bvars files using the concat_bvars script (see the sketch at the end of this list). Note that the order here is very important, as it must correspond to the order specified in the design matrix to be used later for statistical testing. For this example, combining the .bvars files for all of the left hippocampi (due to alphabetical ordering) puts the 5 control subjects first, followed by the 3 subjects with the disease.
  • Create a design matrix. The subject order should match the order in which the .bvars were combined in the concat_bvars call. The design matrix is most easily created using FSL's Glm tool (a single column in this case). To do this, start the Glm GUI. First, set the # timepoints option to be 8 (the number of subjects we have in this example). Next, choose the Higher-level/non-timeseries design option from the top pull down menu in the small window.

In the bigger window (of the Glm GUI) set the values of the EV (the numbers in the second column) to be -1 for the first five entries (our five controls) and +1 for the next three entries (our three patients). Leave the group column as all ones. Once you've done this, go to the Contrasts and F-tests tab. The default t-contrast here is fine (there's not much else you can do with a single group-difference EV) but we also need to add an F-test. So change the number in the F-tests box to 1, and then highlight the button on the right hand side (under F1) to select an F-test that operates on the single t-contrast. This F-test will be the main contrast of interest for our vertex analysis as it allows us to test for differences in either direction.

When this is all set up correctly, save everything using the Save button in the smaller Glm window. Choose the current directory and use the name design_con1_dis2 (as we will assume this is the name used below, although for your own studies you can use any name of your choice). Now exit the Glm GUI.
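
Hedged sketches of the first_roi_slicesdir and concat_bvars calls referenced in the list above. All file names are assumptions based on the naming conventions described in this section (5 con* controls and 3 dis* patients, each with *-L_Hipp_first outputs), so adjust them to the files actually present:

  first_roi_slicesdir *_brain.nii.gz *-L_Hipp_first.nii.gz
  concat_bvars L_Hipp_all.bvars *-L_Hipp_first.bvars

Note that the shell expands the wildcards in alphabetical order, which is what puts the controls before the patients in the combined .bvars file.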

We are now ready to run first_utils and perform the vertex analysis.

We will do the analysis using --useReconMNI to reconstruct the surfaces in MNI152 space (though note that an alternative would be to reconstruct the surfaces in the native space using --useReconNative ).

Perform the first part of vertex analysis using the command:
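
A hedged reconstruction of this command. The combined .bvars file name matches the concat_bvars sketch above and the output basename is inferred from the randomise output names used below; check the first_utils usage message if any option spellings differ in your FSL version:

  first_utils --vertexAnalysis --usebvars -i L_Hipp_all.bvars -d design_con1_dis2.mat -o con1_dis2_L_Hipp --useReconMNI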

If you are running the above command on a personal install of FSL, it may fail unless FSL is installed at /usr/local/fsl .

This first_utils command uses the combined bvars input, created above with concat_bvars, and the design matrix design_con1_dis2.mat. The other options specify that this command is to prepare an output for vertex analysis (since it can also do other things) in standard space ( --useReconMNI ).

Once first_utils has run you are now ready to carry out the cross-subject statistics. We will use randomise for this, as the FIRST segmentations are unlikely to have nice, independent Gaussian errors in them. Normally it is recommended to run at least 5000 permutations (to end up with accurate p-values), but with a small set of subjects like this there is a limit to how many unique permutations are available, so it will run all of them.

For multiple-comparison correction there are several options available in randomise and we will use the cluster-based one here ( -F ), although other options may be better alternatives in many cases. The call to randomise (using the outputs from first_utils, which include a mask defining the boundary of the appropriate structure, as well as the design matrix and contrasts formed above) is:
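
A hedged reconstruction (the input image and mask names are assumptions based on the first_utils output basename used above; -F 3 sets the cluster-forming threshold for the F-statistic):

  randomise -i con1_dis2_L_Hipp.nii.gz -m con1_dis2_L_Hipp_mask.nii.gz -d design_con1_dis2.mat -t design_con1_dis2.con -f design_con1_dis2.fts -F 3 -o con1_dis2_L_Hipp_rand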

Viewing vertex analysis output

The most useful output of randomise is a corrected p-value image, where the values are stored as 1-p (so that the interesting, small p-values appear "bright"). The corrected p-value file is the one containing corrp in the name. This correction is the multiple-comparison correction, and it is only this output which is statistically valid for imaging data - uncorrected p-values should not be reported in general, although they can be useful to look at to get a feeling for what is in your data. The statistically significant results are therefore the ones with values greater than 0.95 (p < 0.05), which here are in con1_dis2_L_Hipp_rand_clustere_corrp_fstat1.

To view the data in FSLView, on top of the standard brain, use a command along the lines of the sketch below. Note that it specifies the display range (0.95 to 1.0) and a useful colourmap (Red-Yellow) in order to easily see the results.
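
  fslview $FSLDIR/data/standard/MNI152_T1_1mm con1_dis2_L_Hipp_rand_clustere_corrp_fstat1 -l Red-Yellow -b 0.95,1 &

(The choice of the 1mm MNI152 image as the underlay is an assumption; use whichever standard image matches the space of your analysis.)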

Find the hippocampus in this image and look to see where the significant differences in shape have been found using this vertex analysis. Normally we would not expect to find much in a group of 8 subjects, but these were quite severe AD cases and so the differences are very marked.

Some notes for running vertex analysis in practice
  • To run vertex analysis, you will need the .bvars files output by FIRST and a design matrix. These contain all the information required by first_utils .
  • It sometimes may be desirable to reconstruct the surfaces in native space (i.e. without the affine normalization to MNI152 space). To do this, instead of --useReconMNI, use the --useReconNative and --useRigidAlign options.
  • When using the --useRigidAlign flag, first_utils will align each surface to the mean shape (from the model used by FIRST) with 6 degrees of freedom (translation and rotation). The transformation is calculated such that the sum-of-squared distances between the corresponding vertices is minimized. This flag is needed when using --useReconNative; however, it can also be used with --useReconMNI to remove local rigid-body differences.
  • The --useScale flag can be used in combination with --useRigidAlign to align the surfaces using 7 DOF; --useScale tells first_utils to also remove global scaling.
  • More details and guidelines for your analysis are contained in the FIRST documentation page .

FSL-VBM

In this section we look at a small study comparing patients and controls for local differences in grey matter volume, using FSL-VBM. Most of the steps have already been carried out, as there isn't enough time in this practical to run all of the registrations required to carry out a full analysis from scratch.

Do an ls in the directory. Note that we have renamed the image files with prefixes so that all controls and patients are organised in "blocks". This makes the statistical design easily match the alphabetical order of the image files (which will later be concatenated and analysed statistically).

We have 10 controls and 8 patients and wish to carry out a control>patient comparison. First, we need to define the statistical design, which here will be a simple two-tailed t-test comparing the two groups. For this, use the Glm GUI to generate simple design.mat and design.con files, using the higher-level/non-timeseries option in the GLM setup window. At this point you need to enter the overall number of subjects as the number of inputs in the GLM setup window (here n=18, then press enter), and then use the wizard button of the GLM setup window with the "two groups, unpaired" option and the appropriate number of subjects for the first group (here ncontrols=10). If the design looks correct, save it by pressing "save" in the GLM setup window and give it the output basename "design". In this analysis, only the design.mat and design.con files will be used.

Moreover, as we have more controls than patients, the template_list text file (which lists the subjects used for the creation of the study-specific template) needs to leave out two of the controls, for instance the last two (con_3699.nii.gz and con_4098.nii.gz), so that the number of controls used to build this study-specific template matches the number of patients.

Preprocessing

We first ran the initial FSL-VBM script, fslvbm_1_bet. This moved all the original files into the origdata folder; to see what they all look like, view the following in a web browser:

The fslvbm_1_bet command has also created some brain-extracted images. We actually ran fslvbm_1_bet twice: first with the 'default' -b option, and then, because the original images have a lot of neck in them (which was often left in by the default brain extraction), with the -N option. Compare the different results from the two options by loading in the two web pages:

It is very obvious which option is working well and which one isn't!

Next, all the brain images are segmented into the different tissue types, and then the study-specific GM template is created, by registering all GM segmentations to standard space, and averaging them together. The command used was (don't run this!):
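
A hedged sketch: the template-creation script is fslvbm_2_template, run here with nonlinear registration:

  fslvbm_2_template -n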

You can view all of the alignments to the MNI152 initial standard space by running the following, and turning on FSLView movie mode:

and then view the alignment of the study-specific template to the MNI152 standard space with:
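
For example (template_GM is listed among the stats/ outputs below):

  fslview $FSLDIR/data/standard/MNI152_T1_2mm stats/template_GM &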

Finally, the registrations to the new, study-specific, template were run for all subjects, and modulated by the warp field expansion (Jacobian), before being combined across subjects into the 4D image stats/GM_mod_merg. An initial GLM model-fit is run in order to allow you to view the raw tstat images at a range of potential smoothings. This was achieved by running (don't run this!):
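
A hedged sketch: the script used for this stage is:

  fslvbm_3_proc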

So now you can have a look at the initial raw tstat images created at the different smoothing levels, and pick the one you "like" best.

The different images that you can see in the stats directory are:

  • GM_mask - the result of thresholding the mean (across subjects) aligned GM image at 1% and turning it into a binary mask.
  • GM_merg - a 4D image containing all subjects' aligned GM images.
  • GM_mod_merg - the same as above, but after the GM images have been "modulated by the warp field Jacobian" (adjusted for warp expansion/contraction).
  • GM_mod_merg_s2 / _s3 / _s4 - the same as above, but after Gaussian smoothing with a sigma of 2, 3 and 4mm respectively.
  • GM_mod_merg_s2_tstat1 (and the _s3 / _s4 equivalents) - the raw t-statistic images from feeding the smoothed datasets into a GLM via randomise.
  • design.mat / design.con - the design matrix and contrast file specifying the cross-subject model that is fit to the data by randomise.
  • template_GM - the study-specific GM template that was derived as part of the FSL-VBM analyses, and to which all subjects' GM images were finally aligned.

You are now ready to carry out the cross-subject statistics. We will use randomise for this, as the above steps are very unlikely to generate nice Gaussian distributions in the data. Normally we would run at least 5000 permutations (to end up with accurate p-values), but this takes a few hours to run, so we will limit the number to 100 (to get a quick-and-dirty result). We will also use TFCE thresholding (Threshold-Free Cluster Enhancement - this is explained in the randomise lecture) which is similar to cluster-based thresholding but generally more robust and sensitive.

For example, if you decide that the appropriate amount of smoothing is with a sigma of 3mm, then the following will run randomise with TFCE and a reduced number of 100 iterations:
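
A hedged sketch, run from the stats directory (the output basename is an arbitrary choice; -T requests TFCE and -n 100 limits the permutations):

  randomise -i GM_mod_merg_s3 -m GM_mask -d design.mat -t design.con -T -n 100 -o fslvbm_s3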

In this example we set the corrected p-threshold to 0.2 (i.e. 0.8 in FSLView), because of the reduced number of subjects in this example and hence low sensitivity to effect - you would not be able to get away with this in practice!

SIENA

SIENA is a package for both single-time-point ("cross-sectional") and two-time-point ("longitudinal") analysis of brain change, in particular the estimation of atrophy (volumetric loss of brain tissue).

The example data is two time points, 24 months apart, from a subject with probable Alzheimer's disease. The command that was used to create the example analysis is (don't run this - it takes too long!):
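
A hedged sketch (the input image names are placeholders; the options shown are the ones referenced later in this section: -d to keep intermediate images, -m for standard-space masking, and -b -30 for the lower Z limit):

  siena timepoint1_image timepoint2_image -d -m -b -30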

The -d flag tells the siena script not to clean up the many intermediate images it creates - you would not normally use this. The other options are explained later.

SIENA has already been run for you. Change directory into the SIENA output directory:

In the SIENA output directory the first timepoint image is named "A" and the second "B", to keep filenames simple and short. To view the output report, open report.html in a web browser. The next few sections take you through the different parts of the webpage report, which correspond to the different stages of the SIENA analysis.

BET brain extraction results

First BET was run on the two input images, with options telling it to create the skull surface image and the binary mask image, as well as the default brain image.
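
A hedged sketch of the equivalent BET call for one timepoint (-s produces the skull image and -m the binary mask):

  bet A A_brain -s -m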

Other BET options can be included in the call to siena by adding -B "betopts" - for example
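
  -B "-f 0.3"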

on the siena command line tells siena to pass on the -f 0.3 option to BET, which causes the estimated brain to be larger if the value used is less than 0.5, and smaller otherwise.

You also might need to use the -c option to BET if you need to tell BET where to center the initial brain surface, such as when you have a huge amount of neck in the image. For example, if it looks like the centre of the brain is at 112,110,78 (in voxels. e.g. as viewed in FSLView), and you want to combine this option with the above -f option, you would add, to the siena command,
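
  -B "-f 0.3 -c 112 110 78"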

You can see the two brain and skull extractions in the webpage report. If you want to see these in more detail, open the relevant images in FSLView, for example:
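
For example (these intermediate image names are assumptions based on the A/B naming described above):

  fslview A A_brain A_brain_skull &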

Be aware that the skull estimate is usually very noisy but that it is only used to determine the overall scaling, and this process is not very sensitive to the noise as long as the majority of points lie on the skull.

FLIRT A-to-B registration results

Now the two time points are registered using the script siena_flirt. This runs the 3-step registration (brains, then skulls, then brains again). The transformation is "halved" so that each image can be transformed into the space halfway between the two. The webpage report shows the alignment of the two brains in this halfway space. You need to check that the two timepoints are fundamentally well-aligned, with only small (e.g. atrophy) changes between them. Look out for mistakes such as: the two images coming from different subjects, one image being left-right flipped relative to the other one, or one image having bad artefacts.

If you want to look at the registration in more detail:
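
For example (the halfway-space image names are assumptions based on SIENA's usual naming):

  fslview A_halfwayto_B B_halfwayto_A &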

FLIRT standard space registration results

Now, if standard-space-based masking has been requested (it was in this case), the two brain images are registered to the standard brain ${FSLDIR}/data/standard/MNI152_T1_2mm_brain using FLIRT. The transforms (and their inverses) are saved. The two brains are registered separately and their transforms compared to test for consistency.

The webpage report shows the two images transformed into standard space, with the overlaying red lines derived from the edges of the standard space template, for comparison.

Field-of-view and standard space masking

If the -m option was set, a standard space brain mask is now transformed into the native image space and applied to the original brain masks produced by BET. This is in most areas a fairly liberal (dilated) brain mask, except around the eyes.

If the -t or -b options are set then an upper or lower limit (in the Z direction) in standard space is defined, to supplement the masking. This is useful, for example, to restrict the field-of-view of the analysis if you have variable field-of-view at the top or bottom of the head in different subjects.

The webpage report shows the -m brain masking in blue, the -t/-b masking in red (you can see the effect of the -b -30 option), and the intersection of the two maskings in green. It is this intersection that is finally used.

FAST tissue segmentation

In order to find all brain/non-brain edge points, tissue-type segmentation is now run on both brain-extracted images. The GM and WM voxels are combined into a single mask, and the mask edges (including internal ventricle edges) are used to find edge motion (discussed below). The webpage report shows the two segmentations.

Change Estimation

The final step is to carry out change analysis on the registered masked brain images. At all points which are reported as boundaries between brain and non-brain, the distance that the brain surface has moved between the two time points is estimated. The mean perpendicular surface motion is computed and converted to PBVC (percentage brain volume change).

The webpage report shows the edge motion colour coded at the brain edge points, and then shows the final global PBVC value. To see the edge motion image in more detail:
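
For example (the image name here is an assumption - look for a *_render image among the SIENA outputs):

  fslview A_halfwayto_B_render &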

"LOOK AT YOUR DATA" - SIENA Problem Cases

We now look at 4 examples of "problem cases" - these were real cases that occurred in one study; they illustrate some of the problems/mistakes that sometimes occur.

Problem case 1

Open report.html in a web browser.

Look at the FLIRT A-to-B registration results. Can you tell what's wrong? The answer is below.

The subject IDs have gotten mixed up - the two timepoint images are from different subjects! (Also, BET is including too much neck, but that's not the main problem.)

Problem case 2

Open report.html in a web browser.

Look at the FLIRT A-to-B registration results. Can you tell what's wrong? The answer is below.

One of the datasets has been left-right flipped (look at the axial slices in the registration animation), despite one of them having been marked with a right-side marker at some point. (and, yes, again BET is not working well on this data).

Problem case 3

Open report.html in a web browser.

Look at the FLIRT A-to-B registration results. Can you tell what's wrong? The answer is below.

Both original images have some movement artefact (ringing) and are quite noisy. It's probably not worth keeping data of this quality. Look at the coronal slices in the registration animation. Also, something else is odd: the images are clearly identical - they must have originally been the SAME timepoint by mistake. The slight boundary differences must be due to slightly different BET results, caused by only one of the images having the right-side marker (seen in the top BET result image).

Problem case 4

Open report.html in a web browser.

Look at the FLIRT A-to-B registration results. Can you tell what's wrong? The answer is below.

The second dataset has bad motion artefact, and one of the datasets has been left-right flipped.

SIENAX

In this section we look at how SIENAX works and examine its most useful outputs. SIENAX estimates total brain tissue volume, from a single image, normalised for skull size.

Open report.html in a web browser. The example data is one time point from a subject with probable Alzheimer's disease. The command that was used to create the example analysis is (don't run this!):
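
A hedged sketch (the input image name is a placeholder; -r requests the extra regional measurements discussed below and -d keeps the intermediate images):

  sienax input_image -d -r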

SIENAX starts by running BET and FLIRT in a manner very similar to SIENA, except that the second time point image is replaced by standard space brain and skull images. Next a standard space brain mask is always used to supplement the BET segmentation.

As before, optional Z limits in Talairach/standard space can be used to mask further.

Next, FAST is used, with partial volume estimation turned on, to provide an accurate estimate of grey and white matter volumes. In order to provide normalised volumes for GM/WM/total, the volumetric scaling factor derived from the registration to standard space is used to multiply the native volumes; the values are thus normalised for head size.

Interesting output images are (view with fslview ):

  • I_stdmaskbrain - the fully masked brain image, the input to the FAST segmentation. (Note where standard-space-based masking has cut off the bottom of the brain.)
  • I_render - the segmentation output colour-overlaid onto the input. (If you zoom in you'll see that the colour overlay is shown in a checkerboard pattern - this is an option in the overlay program, to make the overlay appear more transparent, for clarity.)
  • I_stdmaskbrain_pve_0 (etc.) - the partial volume segmentation outputs.

Because we used the -r option, we also have two extra regional measurements. Use fslview to view I_vent_render ; here the CSF PVE image has been masked by a standard-space (dilated) ventricle mask, to enable SIENAX to estimate ventricular CSF (the colouring is a little hard to see as it was rendered transparently). Now view I_periph_render ; here the GM PVE image has been masked by a standard-space cortex mask to try to remove cerebellum, brain stem, ventricles and deep grey - it's not perfect but it's not bad.

Multi-Channel FAST

Multi-channel segmentation is useful when the contrast or quality of a single image is insufficient to give a good segmentation. Typically this type of segmentation is not needed for healthy controls with good T1-weighted images, as the single-channel results are good and often even better than the multi-channel results. However, when pathological tissues/lesions are present, or when the T1-weighted image quality is not good, multi-channel segmentation can take advantage of the extra contrast between tissue types in the different images and give better results.

The images sub2_t1 and sub2_t2 are T1-weighted and T2-weighted images of the same subject. Are they well aligned? You can get an easy non-interactive combined view of two images (which must have the same image dimensions) with slices.
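
For example:

  slices sub2_t1 sub2_t2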

They look reasonably aligned in the sagittal and coronal views, but the axial views clearly show misalignment between the scans. Before running multi-channel FAST it is necessary to use FLIRT to register the data. Start by running BET on each image to remove the non-brain structures, producing sub2_t1_brain and sub2_t2_brain. Note that it is OK if one of the brain extraction results includes non-brain matter (e.g. eyeballs) while the other is accurate, since the brain mask used by FAST will be the intersection of the two masks.
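
For example:

  bet sub2_t1 sub2_t1_brain
  bet sub2_t2 sub2_t2_brain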

Start the FLIRT GUI:
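
  Flirt &

(As with the other FSL GUIs above, this is Flirt_gui on a Mac.)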

For example, set sub2_t1_brain as the Reference image (NB: clear the existing directory name in the file browser and press enter to get to the local directory) and set sub2_t2_brain as the Input image. Set the Output image to something like sub2_t2_to_t1 and the DOF to 6. All the other FLIRT defaults should be fine, but you could save some processing time by telling FLIRT that the images are Already virtually aligned (in Advanced > Search > Images). FLIRT will take a minute or two to run.

Load sub2_t1_brain and sub2_t2_to_t1 into FSLView to check the result of the registration. Make the higher image in the list show as Red-Yellow and increase its transparency so that you can see how good the overlap is.

You can now forget sub2_t2 .

Run FAST (with the Number of input channels set to 2) on the multi-channel brain-extracted images sub2_t1_brain and sub2_t2_to_t1_brain (or whatever you called these BET outputs). Asking for the default number of classes (3 - assumed to be GM/WM/CSF) gives poor results because bits of other tissues outside of the brain are given a class - so you should run with 4 classes; then results should be good. This takes a few minutes; move on to the next part of the practical and view the results once fast has finished running.
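
A hedged command-line equivalent of the GUI setup described above (-S sets the number of input channels and -n the number of classes; the output basename is an arbitrary choice):

  fast -S 2 -n 4 -o sub2_multichan sub2_t1_brain sub2_t2_to_t1_brain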

Advanced: FAST - Other Options

If you have time to spare after finishing the other practical parts then you can come back and test the effect of various FAST options, obtained by typing:
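
Running fast with no arguments prints its usage message, listing all the options:

  fast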

You could also work out how to colour-overlay segmentation results onto the input image using overlay.
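
One possible sketch (the positional form of overlay varies between FSL versions, so check its usage message first; the segmentation name assumes the multi-channel FAST output basename used above):

  overlay 1 1 sub2_t1_brain -a sub2_multichan_seg 1 4 sub2_overlay
  fslview sub2_overlay &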

Uncorrected segmentation output

This follows on from the initial part of the FIRST practical above and assumes that run_first_all has been successfully run. Having considered the boundary corrected segmentation previously, we now turn to look at the uncorrected segmentation.

The uncorrected segmentation shows two types of voxels: ones that the underlying surface mesh passes through (boundary voxels) and ones that are completely inside the surface mesh (interior voxels). FIRST uses a mesh to model the structure when doing the segmentation, so converting this to a volume requires it to be split into boundary and interior regions like this.

We will now look at the uncorrected volumetric segmentations:
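
For example:

  fslview con0047_brain con0047_all_fast_origsegs &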

To view the segmentation better, change the colourmap of the segmented image to "Red-Yellow" and set the "Max" display range value to 100 for this image. Note that you now see the interior voxels and the boundary voxels in different colours. This is because the boundary voxels are labeled with a value equal to 100 plus that of the interior voxels. That is, the interior and boundary voxels for the left hippocampus are labeled 17 (the CMA label designation for the left hippocampus) and 117 respectively.

The volume con0047_all_fast_origsegs is a 4D file containing each structure's segmentation in a separate 3D volume. If you change the "Volume" control in FSLView from 0 to 1 then you will see the left amygdala result. The structures are kept separate in case the uncorrected segmentations overlap. Play with the transparency settings (or turn the segmentation on and off) to see how good the segmentation is.

These images require boundary correction, which is done automatically by run_first_all. However, there are alternative methods for the boundary correction, which you can specify with run_first_all or apply as a post-processing step on the uncorrected image with first_boundary_corr, although the settings used by run_first_all have been chosen as the optimal ones based on empirical testing.