Commit

fixed spelling, added spelling action
brtnfld committed Oct 21, 2024
1 parent 880d335 commit 77f5360
Showing 10 changed files with 65 additions and 47 deletions.
18 changes: 18 additions & 0 deletions .github/workflows/codespell.yml
@@ -0,0 +1,18 @@
# GitHub Action to automate the identification of common misspellings in text files
# https://github.com/codespell-project/codespell
# https://github.com/codespell-project/actions-codespell
name: codespell
on: [push, pull_request]
permissions:
contents: read
jobs:
codespell:
name: Check for spelling errors
runs-on: ubuntu-latest
steps:
- uses: actions/[email protected]
- uses: codespell-project/actions-codespell@master
with:
ignore_words_list: fom,coo,ku,inout
skip: .git, .github
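For context, codespell works by matching word tokens against a dictionary of known misspellings. A stdlib-only toy sketch of that idea (not codespell's actual implementation; the word list below is illustrative, seeded with misspellings fixed in this commit):

```python
import re

# Tiny illustrative dictionary; codespell ships a much larger one.
MISSPELLINGS = {
    "noticable": "noticeable",
    "managment": "management",
    "sucessfully": "successfully",
    "compatiblity": "compatibility",
}

def check_text(text):
    """Return (word, suggestion, offset) for each known misspelling."""
    hits = []
    for m in re.finditer(r"[A-Za-z]+", text):
        word = m.group(0).lower()
        if word in MISSPELLINGS:
            hits.append((m.group(0), MISSPELLINGS[word], m.start()))
    return hits

print(check_text("Free-space managment is disabled"))
# [('managment', 'management', 11)]
```

The real tool also handles per-word ignore lists (the `ignore_words_list` option above) and path exclusions (`skip`).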

32 changes: 16 additions & 16 deletions clinic/index.html

Some generated files are not rendered by default.

30 changes: 15 additions & 15 deletions clinic/index.org
@@ -185,7 +185,7 @@ understand the context.
: world
: hello
: finished at 08:25:33
-- Noticable speedup but not a panacea
+- Noticeable speedup but not a panacea
#+begin_quote
Eventually I hope to have a version of =h5pyd= that supports =async= (or maybe
an entirely new package), that would make it a little easier to use.
@@ -250,7 +250,7 @@ I have an application that reads groups as units and all the datasets
use the contiguous data layout. I believe this option, if available,
can yield a good read performance.
#+end_quote
-- Intersting suggestions
+- Interesting suggestions
- Elena suggested to first create all datasets with =H5D_ALLOC_TIME_LATE=
- Mark suggested to =H5Pget_meta_block_size= to set aside sufficient
"group space" (Maybe also =H5Pset_est_link_info=?)
@@ -674,7 +674,7 @@ int main()
- A :: Currently, the code will skip entries >64K, because the underlying
=H5Dcreate= will fail. (The logic and code can be much improved.) It's better
to first run =archive_checker_64k= and see if there are any size warnings.
-The =h5compactor= and =h5shredder= were sucessfully run on TAR archives with
+The =h5compactor= and =h5shredder= were successfully run on TAR archives with
tens of millions of small images (<64K).
** Last week's highlights
*** Announcements
@@ -1126,7 +1126,7 @@ However, I cannot get this to work with h5py.
#+end_quote
- [[https://meetingorganizer.copernicus.org/EGU22/EGU22-11201.html][DIARITSup: a framework to supervise live measurements, Digital Twins models computations and predictions for structures monitoring.]]
#+begin_quote
-DIARITSup is a chain of various softwares following the concept of ”system of
+DIARITSup is a chain of various software following the concept of ”system of
systems”. It interconnects hardware and software layers dedicated to in-situ
monitoring of structures or critical components. It embeds data assimilation
capabilities combined with specific Physical or Statistical models like
@@ -1227,7 +1227,7 @@ HDFS?

** Tips, tricks, & insights
*** SWMR (old) and compression "issues" - take 2
-- Free-space managment is disabled in the original SWMR implementation
+- Free-space management is disabled in the original SWMR implementation
- Can lead to file bloat when compression is enabled & overly aggressive
flushing
- **Question:** How does the new SWMR VFD implementation behave?
@@ -1486,7 +1486,7 @@ DATASET "/equilibrium/vacuum_toroidal_field&b0" {
- [[https://www.youtube.com/watch?v=WlnUF3LRBj4][Awkward Array: Manipulating JSON-like Data with NumPy-like Idioms]]
- SciPy 2020 presentation by Jim Pivarski
- Watch this!
-- How would you repesent something like this in HDF5? (Example from Jim's video)
+- How would you represent something like this in HDF5? (Example from Jim's video)
#+begin_src python
import awkward as ak
array = ak.Array([
@@ -1708,7 +1708,7 @@ Next time...
retval = EXIT_FAILURE;
goto fail_write;
}
-printf("Write successed.\n");
+printf("Write succeeded.\n");

if (H5Fflush(file, H5F_SCOPE_GLOBAL) < 0) {
retval = EXIT_FAILURE;
@@ -1889,7 +1889,7 @@ This brings us to today's ...

The expression below is a [templated] Class datatype in C++, placed in a
non-contiguous memory location, requiring scatter-gather operators and a
-mechanism to dis-assemble reassemble the components. Becuase of the complexity
+mechanism to disassemble and reassemble the components. Because of the complexity
AFAIK there is no automatic support for this sort of operation.

#+begin_src C++
@@ -2091,7 +2091,7 @@ the process writing to the file was killed.
- Face-to-face at [[https://www.iter.org/][ITER]] in Saint Paul-lez-Durance, France
- Reserve your spot before telling your friends! =;-)=

-**** ASCR Workshop January 2022 on the Managment and Storage of Scientific Data
+**** ASCR Workshop January 2022 on the Management and Storage of Scientific Data
- [[https://www.osti.gov/biblio/1843500-position-papers-ascr-workshop-management-storage-scientific-data][Position Papers]]
- [[https://www.osti.gov/biblio/1845705][Technical Report]]

@@ -2572,7 +2572,7 @@ All solutions come with different trade-offs!

** Tips, tricks, & insights
*** How do HDF5-UDF work?
-Currently, they are repesented as chunked datasets with a single chunk. That's
+Currently, they are represented as chunked datasets with a single chunk. That's
why they work fine with existing tools. The UDF itself is executed as part of the
HDF5 filter pipeline. Its code is stored in the dataset blob data plus metadata
and managed by the UDF handler.
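As a rough analogy for how a UDF-backed dataset behaves (values produced by code at read time rather than stored), here is a stdlib-only Python sketch; it mimics the idea only, not the actual HDF5-UDF API or filter pipeline:

```python
class UDFDataset:
    """A 1-D 'dataset' whose values are computed on read by a user function."""

    def __init__(self, shape, udf):
        self.shape = shape
        self.udf = udf  # invoked lazily on each read, like a pipeline stage

    def __getitem__(self, idx):
        # Values are generated on demand; nothing is stored in the object.
        if isinstance(idx, slice):
            return [self.udf(i) for i in range(*idx.indices(self.shape))]
        return self.udf(idx)

# A "dataset" of squares, materialized only when sliced
squares = UDFDataset(10, lambda i: i * i)
print(squares[2:5])  # [4, 9, 16]
```

Existing tools keep working with UDF datasets for the same reason this object supports ordinary indexing: the computation is hidden behind the normal read interface.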
@@ -2950,9 +2950,9 @@ if I set it to a big value (1024).

I wasn’t able to find how parallelism is exactly implemented. From the above
behaviour it looks like the file is being locked which then blocks my whole
-programm, especially if the stride is big (more time for the other ranks to run
-into a lock and be idle inbetween). Is that really the case? I write data
-continously, so theoretically there is no need for a lock. Is is possible to
+program, especially if the stride is big (more time for the other ranks to run
+into a lock and be idle in between). Is that really the case? I write data
+continuously, so theoretically there is no need for a lock. Is it possible to
tell the driver “don’t lock the file”?
#+end_quote
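On POSIX systems, HDF5's file locking is advisory: it only blocks processes that also request the lock. A stdlib-only sketch of that general mechanism (an illustration of advisory locking, not HDF5's internal code; HDF5 also honors the =HDF5_USE_FILE_LOCKING= environment variable mentioned below):

```python
import fcntl
import os
import tempfile

# Advisory (cooperative) lock: other processes are blocked only if they
# also ask for the lock on the same file.
fd, path = tempfile.mkstemp()
try:
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # exclusive, non-blocking
    os.write(fd, b"exclusive writer\n")
    fcntl.flock(fd, fcntl.LOCK_UN)                  # release the lock
finally:
    os.close(fd)
    os.remove(path)
print("lock acquired and released")
```

A second process attempting the same non-blocking =LOCK_EX= while the first holds it would get an error instead of waiting, which is the "idle ranks" behaviour described in the quote.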
- What's a 'stride'? (not a hyperslab stride...)
@@ -3649,7 +3649,7 @@ fail_file:

#+END_SRC

-- The ouput file produce looks like this:
+- The output file produced looks like this:

#+BEGIN_EXAMPLE

@@ -3791,7 +3791,7 @@ export HDF5_USE_FILE_LOCKING="FALSE"
- "Partial I/O gets in the way" - sorting fields by name or offset
- Happens on each write call => overhead
- Can this be avoided? User might provide a patch...
-**** [[https://forum.hdfgroup.org/t/read-write-specific-coordiantes-in-multi-dimensional-dataset/9137][Read/write specific coordiantes in multi-dimensional dataset?]]
+**** [[https://forum.hdfgroup.org/t/read-write-specific-coordinates-in-multi-dimensional-dataset/9137][Read/write specific coordinates in multi-dimensional dataset?]]
- Thomas is looking for use cases from =h5py= users
#+BEGIN_SRC python

@@ -35,4 +35,4 @@ The way optional operations are handled in the virtual object layer (VOL) change
The virtual file layer has changed in HDF5 1.14.0. Existing virtual file drivers (VFDs) will have to be updated to work with this version of the library.

## Virtual Object Layer (VOL) Changes
-The virtual object layer has changed significantly in HDF5 1.14.0 and the 1.12 VOL API is now considered deprecated and unsupported. Existing virtual object layer connectors shoul be updated to work with this version of the library.
+The virtual object layer has changed significantly in HDF5 1.14.0 and the 1.12 VOL API is now considered deprecated and unsupported. Existing virtual object layer connectors should be updated to work with this version of the library.
2 changes: 1 addition & 1 deletion documentation/hdf5-docs/hdf5_topics/DebugH5App.md
@@ -62,7 +62,7 @@ Code to accumulate statistics is included at compile time by using the --enable-
| hl | No | Local heaps |
| i | Yes | Interface abstraction |
| mf | No | File memory management |
-| mm | Yes | Library memory managment |
+| mm | Yes | Library memory management |
| o | No | Object headers and messages |
| p | Yes | Property lists |
| s | Yes | Data spaces |
@@ -35,4 +35,4 @@ The way optional operations are handled in the virtual object layer (VOL) change
The virtual file layer has changed in HDF5 1.14.0. Existing virtual file drivers (VFDs) will have to be updated to work with this version of the library.

## Virtual Object Layer (VOL) Changes
-The virtual object layer has changed significantly in HDF5 1.14.0 and the 1.12 VOL API is now considered deprecated and unsupported. Existing virtual object layer connectors shoul be updated to work with this version of the library.
+The virtual object layer has changed significantly in HDF5 1.14.0 and the 1.12 VOL API is now considered deprecated and unsupported. Existing virtual object layer connectors should be updated to work with this version of the library.
12 changes: 6 additions & 6 deletions documentation/hdf5-docs/release_specifics/sw_changes_1.10.md
@@ -18,7 +18,7 @@ Note that bug fixes and performance enhancements in the C library are automatica

The following information is included below.

-* <a href="#compatiblity">Compatiblity and Performance Issues</a>
+* <a href="#compatibility">Compatibility and Performance Issues</a>
* <a href="#9versus8">Release 1.10.9 versus 1.10.8</a>
* <a href="#8versus7">Release 1.10.8 versus 1.10.7</a>
* <a href="#7versus6">Release 1.10.7 versus 1.10.6</a>
@@ -32,7 +32,7 @@ The following information is included below.

See [API Compatibility Reports for 1.10]() for information regarding compatibility with previous releases.

-<h2 id="compatiblity">Compatiblity and Performance Issues</h2>
+<h2 id="compatibility">Compatibility and Performance Issues</h2>

Not all HDF5-1.10 releases are compatible. Users should NOT be using 1.10 releases prior to HDF5-1.10.3. See the compatibility matrix below for details on compatibility between 1.10 releases:

@@ -529,7 +529,7 @@ hid\_t

Changed from a 32-bit to a 64-bit value.

-hid\_t is the type is used for all HDF5 identifiers. This change, which is necessary to accomodate the capacities of modern computing systems, therefore affects all HDF5 applications. If an application has been using HDF5's hid\_t the type, recompilation will normally be sufficient to take advantage of HDF5 Release 1.10.0. If an application uses an integer type instead of HDF5's hid\_t type, those identifiers must be changed to a 64-bit type when the application is ported to the 1.10.x series.
+hid\_t is the type used for all HDF5 identifiers. This change, which is necessary to accommodate the capacities of modern computing systems, therefore affects all HDF5 applications. If an application has been using HDF5's hid\_t type, recompilation will normally be sufficient to take advantage of HDF5 Release 1.10.0. If an application uses an integer type instead of HDF5's hid\_t type, those identifiers must be changed to a 64-bit type when the application is ported to the 1.10.x series.

New Features and Feature Sets
Several new features are introduced in HDF5 Release 1.10.0.
@@ -998,7 +998,7 @@ The original function is renamed to H5Fget\_info1 and deprecated.

A new version of the function, H5Fget\_info2, is introduced.

-The compatiblity macro H5Fget\_info is introduced.
+The compatibility macro H5Fget\_info is introduced.

H5F\_info\_t

@@ -1008,15 +1008,15 @@ The original struct is renamed to H5F\_info1\_t and deprecated.

A new version of the struct, H5F\_info2\_t, is introduced.

-The compatiblity macro H5F\_info\_t is introduced.
+The compatibility macro H5F\_info\_t is introduced.

H5Rdereference

The original function is renamed to H5Rdereference1 and deprecated.

A new version of the function, H5Rdereference2, is introduced.

-The compatiblity macro H5Rdereference is introduced.
+The compatibility macro H5Rdereference is introduced.

### Autotools Configuration and Large File Support
Autotools configuration has been extensively reworked and Autotools' handling of large file support has been overhauled in this release.
@@ -94,7 +94,7 @@ For a description of the major new features that were introduced, please see [Ne

### In the C/Fortran Interface (main library)

-Folllowing are the new or changed APIs introduced in HDF5-1.12.0. Those introduced with a new feature list the specific new feature that they were added for.
+Following are the new or changed APIs introduced in HDF5-1.12.0. Those introduced with a new feature list the specific new feature that they were added for.

| Function | Fortran | Description |
| ----------------------|------------- | -------------------------------------------- |
