This is some cool work.
Not sure if it fits, but I still have ~20k curated step-by-step solutions for mathematics (pedagogical math) "lying" around from my previous startup. They are all hand-curated, and could even be used for fine-tuning.
Here are some details: the dataset has 20,600 Abstract Exercises, which turn into 1,193,958 Concrete Exercises.
An Abstract Exercise looks like this: a + b = c. A Concrete Exercise looks like this: 2 + 3 = 5. Total compiled file size (JSONL): 11.6 GB.
And here is an explorer to see some of the data https://curriculum.amy.app/ToM
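To make the abstract/concrete distinction above tangible, here is a minimal sketch of how an abstract template like "a + b = c" might be expanded into concrete JSONL rows. The schema, value ranges, and `instantiate` helper are all hypothetical illustrations, not the dataset's actual format.

```python
import itertools
import json

# Hypothetical sketch: expand an "abstract" exercise template over concrete
# values, keeping only arithmetically consistent instances. The real
# dataset's schema is not shown in this thread, so this is an assumption.
def instantiate(template: str, variables: dict) -> list:
    """Expand a template like 'a + b = c' into concrete exercises."""
    names = list(variables)
    concrete = []
    for values in itertools.product(*variables.values()):
        binding = dict(zip(names, values))
        if binding["a"] + binding["b"] != binding["c"]:
            continue  # discard instances where the equation does not hold
        exercise = template
        for name, value in binding.items():
            exercise = exercise.replace(name, str(value))
        concrete.append({"abstract": template, "concrete": exercise})
    return concrete

rows = instantiate("a + b = c",
                   {"a": range(1, 4), "b": range(1, 4), "c": range(2, 7)})
for row in rows:
    print(json.dumps(row))  # one JSONL line per concrete exercise
```

With small ranges like these, one abstract exercise already yields nine concrete ones, which is roughly the ~58x abstract-to-concrete expansion the dataset sizes above suggest.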
Very nice! Maybe you could put this dataset in a repository like GitHub, Kaggle, or Hugging Face if you are not doing anything with it. It could be helpful for training models.
There are only 3 entries, am I correct?
Yes, we are at a very early stage. We are looking for other physics experts to help grow it.
I like the idea of having a dataset for physics, but those entries are very basic. Most of physics involves very complicated maths, and it will be difficult to write an entry for much of it.
For example, imagine the entry for the standard equation: should the whole derivation and symbolic implementation be done as a single entry? It will be difficult to separate it into logical entries that reference each other, and many physical ideas are fundamentally different, leading to divergences.
I have the impression that it would be easier to just parse reference books and format each paragraph/section as an entry, and maybe build a graph (treating the reference book as authoritative on the subject).
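The parse-a-book-into-a-graph idea could be sketched roughly like this: split a text into sections, make each section an entry, and add an edge whenever one entry's body mentions another entry's title. The section format, the sample "book", and the `build_entry_graph` helper are all invented for illustration; a real parser would need much more robust heuristics.

```python
import re

# Hedged sketch, not a real book parser: sections are assumed to be
# delimited by markdown-style "## " headings, and cross-references are
# detected by naive title mentions in the body text.
def build_entry_graph(text: str) -> dict:
    sections = re.split(r"\n(?=## )", text)
    entries = {}
    for section in sections:
        lines = section.strip().splitlines()
        title = lines[0].lstrip("# ").strip()
        body = "\n".join(lines[1:]).strip()
        entries[title] = {"body": body, "refs": []}
    for title, entry in entries.items():
        for other in entries:
            if other != title and other in entry["body"]:
                entry["refs"].append(other)  # edge: this entry cites `other`
    return entries

book = """## Newton's laws
F = ma.

## Momentum
Follows from Newton's laws."""
graph = build_entry_graph(book)
```

Here `graph["Momentum"]["refs"]` would contain "Newton's laws", giving the entry-to-entry graph structure discussed above.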
I guess you mean the Lagrangian of the Standard Model… which, I agree, will be daunting… although there is no size limit in a JSON entry…
The idea of automatically parsing books is very nice and possibly faster, but note that:
- there are already various datasets of physics papers and similar content
- the result will be quite different from what we intend here, which is a high-quality dataset of physics results with clear derivations (whenever a derivation exists)
Maybe we can still use your idea to achieve the last point in some way… perhaps there is a book that is already formatted like the dataset and could serve as a starting point. But I don't know of any.