<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://hui947.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://hui947.github.io/" rel="alternate" type="text/html" /><updated>2025-04-01T02:53:19-07:00</updated><id>https://hui947.github.io/feed.xml</id><title type="html">Huiqi Yang</title><subtitle>Personal websites</subtitle><author><name>Huiqi Yang</name><email>zalavaa@ccf.org</email></author><entry><title type="html">Access detection measurements from a QuPath project in python</title><link href="https://hui947.github.io/posts/2024/04/09/" rel="alternate" type="text/html" title="Access detection measurements from a QuPath project in python" /><published>2024-04-09T00:00:00-07:00</published><updated>2024-04-09T00:00:00-07:00</updated><id>https://hui947.github.io/posts/2024/04/qupath-paquo-1</id><content type="html" xml:base="https://hui947.github.io/posts/2024/04/09/"><![CDATA[<h2 id="access-detection-measurements-from-a-qupath-project-in-python">Access detection measurements from a QuPath project in python.</h2>
<p>Iterate over each image and, within each image, over each annotation to access its detections and their measurements, then collect the measurements into a pandas <em>DataFrame</em>.</p>

<h3 id="example">Example</h3>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="s">"""
example showing how to get detections for each annotation of an image within a project

Created on Tue Apr  9 13:53:22 2024

@author: zalavadia.ajay@gmail.com
"""</span>
<span class="kn">from</span> <span class="nn">paquo.projects</span> <span class="kn">import</span> <span class="n">QuPathProject</span>
<span class="kn">import</span> <span class="nn">pandas</span> <span class="k">as</span> <span class="n">pd</span>

<span class="n">EXAMPLE_PROJECT</span> <span class="o">=</span> <span class="s">"D:</span><span class="se">\\</span><span class="s">_TestFiles</span><span class="se">\\</span><span class="s">test-project</span><span class="se">\\</span><span class="s">project.qpproj"</span>

<span class="k">with</span> <span class="n">QuPathProject</span><span class="p">(</span><span class="n">EXAMPLE_PROJECT</span><span class="p">,</span> <span class="n">mode</span><span class="o">=</span><span class="s">'r'</span><span class="p">)</span> <span class="k">as</span> <span class="n">qp</span><span class="p">:</span>
    <span class="k">print</span><span class="p">(</span><span class="s">"Project Name: "</span><span class="p">,</span> <span class="n">qp</span><span class="p">.</span><span class="n">name</span><span class="p">)</span>
    <span class="c1"># iterate over the images
</span>    <span class="k">for</span> <span class="n">image</span> <span class="ow">in</span> <span class="n">qp</span><span class="p">.</span><span class="n">images</span><span class="p">:</span>
        <span class="c1"># annotations and detections are accessible via the hierarchy
</span>        <span class="n">annotations</span> <span class="o">=</span> <span class="n">image</span><span class="p">.</span><span class="n">hierarchy</span><span class="p">.</span><span class="n">annotations</span>
        <span class="n">detections</span> <span class="o">=</span> <span class="n">image</span><span class="p">.</span><span class="n">hierarchy</span><span class="p">.</span><span class="n">detections</span>

            <span class="k">if</span> <span class="nb">len</span><span class="p">(</span><span class="n">detections</span><span class="p">)</span> <span class="o">&gt;</span> <span class="mi">0</span><span class="p">:</span>
            <span class="k">for</span> <span class="n">annotation</span> <span class="ow">in</span> <span class="n">annotations</span><span class="p">:</span> 
                <span class="c1"># list comprehension: keep detections whose parent is this annotation
</span>                <span class="n">children</span> <span class="o">=</span> <span class="p">[</span><span class="n">detection</span> <span class="k">for</span> <span class="n">detection</span> <span class="ow">in</span> <span class="n">detections</span> <span class="k">if</span> <span class="n">detection</span><span class="p">.</span><span class="n">parent</span><span class="p">.</span><span class="n">name</span> <span class="o">==</span> <span class="n">annotation</span><span class="p">.</span><span class="n">name</span><span class="p">]</span>
                <span class="c1"># build a dataframe from the measurements dictionary of each detection
</span>                <span class="n">df</span> <span class="o">=</span> <span class="n">pd</span><span class="p">.</span><span class="n">DataFrame</span><span class="p">(</span><span class="n">detection</span><span class="p">.</span><span class="n">measurements</span> <span class="k">for</span> <span class="n">detection</span> <span class="ow">in</span> <span class="n">children</span><span class="p">)</span>
                <span class="c1"># print the total number of detections and the total number of measurements
</span>                <span class="k">print</span><span class="p">(</span><span class="n">annotation</span><span class="p">.</span><span class="n">name</span> <span class="p">,</span><span class="s">"--&gt; Number of detections: "</span><span class="p">,</span> <span class="n">df</span><span class="p">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="s">" Number of measurements: "</span><span class="p">,</span> <span class="n">df</span><span class="p">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span>
</code></pre></div></div>
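<p>To analyze measurements across annotations, the per-annotation frames built above can be combined into one table with the annotation name as an extra column. A minimal, self-contained sketch — the measurement dictionaries and column names below are made-up stand-ins for what <code>detection.measurements</code> returns, and the annotation names are hypothetical:</p>

```python
import pandas as pd

# hypothetical stand-ins for detection.measurements, grouped by parent annotation
measurements_by_annotation = {
    "Tumor": [
        {"Nucleus: Area": 52.1, "Nucleus: DAB OD mean": 0.81},
        {"Nucleus: Area": 47.6, "Nucleus: DAB OD mean": 0.66},
    ],
    "Stroma": [
        {"Nucleus: Area": 39.0, "Nucleus: DAB OD mean": 0.12},
    ],
}

frames = []
for name, rows in measurements_by_annotation.items():
    df = pd.DataFrame(rows)
    df.insert(0, "Annotation", name)  # tag every row with its parent annotation
    frames.append(df)

# one table with every detection, ready for groupby, filtering, or export
all_detections = pd.concat(frames, ignore_index=True)
print(all_detections.groupby("Annotation").size())
```

<p>The same pattern drops into the loop above: build <code>df</code> per annotation, append it to a list, and concatenate once at the end (for example to write a single CSV per image).</p>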

<h3 id="links-with-more-infomration-about-paquo">Links with more information about paquo:</h3>

<ul>
  <li><a href="https://paquo.readthedocs.io/en/latest/index.html">https://paquo.readthedocs.io/en/latest/index.html</a></li>
  <li><a href="https://github.com/Bayer-Group/paquo">https://github.com/Bayer-Group/paquo</a></li>
</ul>

<h3 id="installation-and-setup-notes">Installation and setup notes:</h3>

<ol>
  <li>Create conda environment</li>
  <li>Install <strong><em>paquo</em></strong></li>
  <li>Update config file with path to <strong><em>QuPath</em></strong> installation directory</li>
  <li>Install <strong><em>spyder</em></strong> IDE</li>
</ol>
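<p>The steps above can be sketched as the following shell session. The environment name, Python version, and QuPath path are assumptions — adjust them to your installation. This is shown for a Unix shell; on Windows use <code>set</code> and a path like <code>D:\QuPath</code>. paquo can also be pointed at QuPath by editing its <code>.paquo.toml</code> config file instead of setting the environment variable:</p>

```shell
# steps 1-2: create an environment and install paquo (names/versions are examples)
conda create -n paquo-env python=3.10 -y
conda activate paquo-env
pip install paquo

# step 3: point paquo at the QuPath installation directory; the
# PAQUO_QUPATH_DIR environment variable is an alternative to
# editing the qupath_dir entry in the .paquo.toml config file
export PAQUO_QUPATH_DIR="/opt/QuPath"

# step 4: install the spyder IDE into the same environment
pip install spyder
```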

<blockquote>
  <p>I had some trouble using <strong><em>paquo</em></strong> via the PyCharm IDE; more investigation is needed to troubleshoot.</p>
</blockquote>]]></content><author><name>Huiqi Yang</name><email>zalavaa@ccf.org</email></author><category term="QuPath" /><category term="paquo" /><summary type="html"><![CDATA[Access detection measurements from a QuPath project in python. Iterate over each image and, within each image, over each annotation to access its detections and their measurements, then collect the measurements into a pandas dataframe.]]></summary></entry><entry><title type="html">Resonance scanners are cool again, thanks to the Noise2Void</title><link href="https://hui947.github.io/posts/2023/02/01/" rel="alternate" type="text/html" title="Resonance scanners are cool again, thanks to the Noise2Void" /><published>2023-02-01T00:00:00-08:00</published><updated>2023-02-01T00:00:00-08:00</updated><id>https://hui947.github.io/posts/2023/02/N2V-For-Resonance-Scanner</id><content type="html" xml:base="https://hui947.github.io/posts/2023/02/01/"><![CDATA[<h2 id="video-rate-confocal-imaging-using-resonance-scanners-followed-by-post-processing-using-the-noise2void">Video-rate confocal imaging using resonance scanners followed by post-processing with Noise2Void.</h2>
<p>Resonance scanners, although underutilized, possess tremendous potential for high-speed acquisition in scanning systems. However, their limited popularity stems from the challenge of noise in acquired images, often requiring frame averaging or integration. Unfortunately, this compromises the speed of image capture, hindering their widespread adoption for live imaging. Thankfully, there is a solution: Noise2Void.</p>

<p>Noise2Void offers a powerful way to address the noise associated with resonance scanners. By leveraging this technique, we can effectively denoise images acquired at the scanner’s full speed. Consider the Leica SP8 Laser Scanning Confocal Microscope equipped with resonance scanners, where confocal time-lapse movies can be acquired at video rate, 27 frames per second (fps).</p>
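<p>For intuition, the trick that lets Noise2Void train on single noisy images is "blind-spot" masking: a few target pixels in each training patch are replaced by values from random neighbours, so the network cannot simply learn the identity mapping at those positions. A minimal numpy sketch of that masking step — the function name and parameters are illustrative, not the API of the n2v library:</p>

```python
import numpy as np

def n2v_mask(patch, n_pixels=16, radius=2, rng=None):
    """Noise2Void-style blind-spot masking: replace a few target pixels
    with values drawn from their local neighbourhood, so a network
    trained to predict the original values cannot copy its input."""
    rng = np.random.default_rng(rng)
    masked = patch.copy()
    h, w = patch.shape
    ys = rng.integers(0, h, n_pixels)  # rows of the masked (blind-spot) pixels
    xs = rng.integers(0, w, n_pixels)  # columns of the masked pixels
    for y, x in zip(ys, xs):
        # draw a random neighbour offset within the radius, excluding (0, 0)
        dy, dx = 0, 0
        while dy == 0 and dx == 0:
            dy = int(rng.integers(-radius, radius + 1))
            dx = int(rng.integers(-radius, radius + 1))
        ny = int(np.clip(y + dy, 0, h - 1))
        nx = int(np.clip(x + dx, 0, w - 1))
        masked[y, x] = patch[ny, nx]  # substitute the neighbour's value
    return masked, (ys, xs)
```

<p>During training, the network sees <code>masked</code> as input and is penalized only at the blind-spot positions against the original <code>patch</code>; in practice one would use the n2v package rather than reimplement this.</p>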

<p>With Noise2Void, the need for frame averaging or integration is eliminated, allowing high-speed imaging. Imagine capturing the intricacies of dynamic cellular processes, monitoring rapid events, and gaining real-time insights into living systems. Noise2Void unlocks the true capabilities of resonance scanners that many of us already have.</p>

<p>In conclusion, resonance scanners have been overlooked in microscopy due to noise challenges. However, by leveraging Noise2Void, we can overcome these limitations and harness the power of resonance scanners. The Leica SP8 Laser Scanning Confocal Microscope, combined with this innovative technique, offers video-rate time-lapse imaging at 27 fps.</p>

<p>Here is an example of such acquisition:</p>

<p>Time-lapse acquisition of red blood cells flowing at high speed through the vasculature of a zebrafish. The data were acquired on a Leica SP8 Laser Scanning Confocal Microscope equipped with resonance scanners, using a 25x water immersion lens.</p>
<video src="https://user-images.githubusercontent.com/10900214/216131637-ffdb309c-7df5-4824-9eff-f3f5eb549f7f.mp4" controls="controls" style="max-width: 650px;">
</video>

<p>Link to the Noise2Void paper: <a href="https://arxiv.org/abs/1811.10980">https://arxiv.org/abs/1811.10980</a></p>

<hr />
<p>Author: Ajay Zalavadia</p>]]></content><author><name>Huiqi Yang</name><email>zalavaa@ccf.org</email></author><category term="Resonance scanner" /><category term="live imaging" /><category term="Denoising" /><summary type="html"><![CDATA[Video-rate confocal imaging using resonance scanners followed by post processing using the Noise2Void. Resonance scanners, although underutilized, possess tremendous potential for high-speed acquisition in scanning systems. However, their limited popularity stems from the challenge of noise in acquired images, often requiring frame averaging or integration. Unfortunately, this compromises the speed of image capture, hindering their widespread adoption for live imaging. Thankfully, there is a solution: Noise2Void.]]></summary></entry></feed>