Schemas and propositional logic rules

How do I define this?

And do you have any ideas on how to make rules for these, on the assumption that I can use an external application like Rasa?

Hi, I would go for the following approach:

define

storyality-entity sub entity,
  plays storyality-hierarchy:superior,
  plays storyality-hierarchy:inferior;

storyality-hierarchy sub relation,
  relates superior,
  relates inferior;

multiverse sub storyality-entity;
universe sub storyality-entity;
supercluster-complex sub storyality-entity;
...

This allows all the storyality-entity subtypes to inherit the ability to play roles in storyality-hierarchy. I’m not sure what kinds of rules you’d want to define, but I can see that a transitive hierarchy rule would be useful:

define

indirect-storyality-hierarchy sub storyality-hierarchy;

rule transitive-storyality-hierarchy:
  when {
    (superior: $x, inferior: $y) isa storyality-hierarchy;
    (superior: $y, inferior: $z) isa! storyality-hierarchy;
  } then {
    (superior: $x, inferior: $z) isa indirect-storyality-hierarchy;
  };
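
With some data in place, the rule will derive the indirect links; for example (a purely illustrative insert):

insert
$m isa multiverse;
$u isa universe;
$c isa supercluster-complex;
(superior: $m, inferior: $u) isa storyality-hierarchy;
(superior: $u, inferior: $c) isa storyality-hierarchy;

Here the rule infers an indirect-storyality-hierarchy between $m and $c.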

You could then query all storyality-hierarchy instances with the following, including the indirect ones:

match
(superior: $s, inferior: $i) isa storyality-hierarchy;

Be warned, doing general queries on transitive relations without any bound roleplayers will generate a lot of results, and this could be problematic if you have a large dataset and low machine specs. Queries with a single roleplayer bound will have much better performance:

match
$s isa planet, has name "Earth";
$i isa storyality-entity, has name $n;
(superior: $s, inferior: $i) isa storyality-hierarchy;
get $n;

I’m not sure how Rasa will affect your data model or queries.

Though I can see the advantages of having subtypes inherit the ability to play roles in the hierarchy, I can’t see anything that actually defines the hierarchy, i.e. that the multiverse is higher in the hierarchy than the universe, and so on. Also, how can I say that each universe has its own laws of physics and each country has its own languages with such a schema?

Also, in regards to Rasa: the question was about the propositional logic rules:
Propositional calculus - Wikipedia
I was looking at:

and thought that some of the rules may not be able to be implemented in TypeQL directly and may require some actions to be taken by an external application.

TypeQL’s rules function on deductive reasoning, which can strongly complement the inductive reasoning capabilities of neural networks if used in tandem. If you’re interested in other logical constructs, I can’t recommend implementation strategies without specific requirements or examples. I’ll ask Cristoph, our Head of Research, to take a look at this thread as he’ll be able to comment from a more theoretical perspective.

If you want to more tightly constrain the properties of the types in your hierarchy, I would recommend a branching structure with one leaf per branch point:

storyality-entity sub entity;

multiverse sub storyality-entity;
sub-multiverse-entity sub storyality-entity;

universe sub sub-multiverse-entity;
sub-universe-entity sub sub-multiverse-entity;

supercluster-complex sub sub-universe-entity;
sub-supercluster-complex sub sub-universe-entity;
...

You can then define relations that allow any storyality-entity to be a part of a higher-rank one:

storyality-hierarchy sub relation,
  relates superior,
  relates inferior;

multiverse-content sub storyality-hierarchy,
  relates multiverse as superior,
  relates contents as inferior;

multiverse plays multiverse-content:multiverse;
sub-multiverse-entity plays multiverse-content:contents;

universe-content sub storyality-hierarchy,
  relates universe as superior,
  relates contents as inferior;

universe plays universe-content:universe;
sub-universe-entity plays universe-content:contents;
...
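
Data for this variant then uses the specialised role names; a minimal, purely illustrative insert:

insert
$m isa multiverse;
$u isa universe;
(multiverse: $m, contents: $u) isa multiverse-content;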

Finally, you can make properties inherit by assigning them to the branch entities:

sub-multiverse-entity owns law-of-physics;

or directly to the type they pertain to if you don’t want them inherited:

country owns language;
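
For those two lines to compile, the attributes (and the country type) also need to be defined somewhere; a sketch, with country’s position in the branching hierarchy left as a placeholder assumption:

define
law-of-physics sub attribute, value string;
language sub attribute, value string;
country sub sub-universe-entity;  # placeholder: put country wherever it belongs in your branch structure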

Is there something specific you’d like to do with propositional rules? (I’ve skimmed through the video you posted, and I couldn’t really see where propositional rules entered the picture.)

Our rules are based on Definite/Horn clause rules - a fragment of first-order logic. This should make them more general than most propositional rule systems.

Concretely, TypeQL should be able to express any rule with exactly one unnegated literal in the head,
i.e., no writing
p & q -> ~r
and no writing
p -> q | r

p -> q & r is equivalent to the pair of rules p -> q and p -> r, which are both legal.
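
As a quick TypeQL sketch of that last point (using a hypothetical proposition entity with hypothetical boolean attributes p, q and r), p -> q & r just becomes two rules:

define
p sub attribute, value boolean;
q sub attribute, value boolean;
r sub attribute, value boolean;
proposition sub entity, owns p, owns q, owns r;

rule p-implies-q:
  when { $x isa proposition, has p true; }
  then { $x has q true; };

rule p-implies-r:
  when { $x isa proposition, has p true; }
  then { $x has r true; };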

If you’re interested in more general propositional theories, that might not be as straightforward to express in TypeQL, but it would help to have more concrete examples of what you’d like to do.

What I would like to do is a system that learns through inference: by providing a small set of general rules, the system should be able to create new rules for itself. Now, as I understand it, TypeDB cannot create new rules for itself without an external application to insert said rules. That is where an application like Rasa comes in. Rasa has custom actions. TypeDB could infer, through propositional logic, what new rules it needs in order to learn something new, and send that information to a program like Rasa, which, by using a template of how rules are defined, could inject the new rules for TypeDB to learn.

Is Rasa specifically relevant or can you use any external program to learn the rules?

To concretise the discussion a little bit:
TypeQL uses Horn clause logic to represent its rules. The field of inductive logic programming studies the problem of learning rules in such highly-relational databases. Aleph is one such inductive rule learning system, which learns Horn rules given a template.

  • It uses Prolog, but that shouldn’t be too hard to port to TypeQL.
  • It does indeed need a template language. In section 3, Directives, modeh specifies the then (head) of a rule, and modeb specifies what can occur in the when (body) of a rule.
  • It works in the supervised setting, so you need to provide examples (sections 5 and 6).
  • It automatically augments the examples with all the data in your database (section 4, background knowledge).

If you don’t want the supervised setting, there are other systems (such as WARMR) which work in the association rule mining setting.
These systems are dated and the field has since evolved in many directions; to my knowledge, Popper is the most recent system. There are learners for other kinds of logic too, but it would help if we concretised the discussion a little bit.

But to answer your question, yes - it is entirely possible to use an external system to learn rules and then insert them into TypeDB.

While other programs could be used, Rasa offers integration with NLU programs like spaCy. Also, speed is important: Prolog is really slow. I suppose Node.js / Node-RED could be used, but I don’t know of any existing projects that use TypeDB.

I have written the first three rules of propositional logic from Wikipedia.

Could someone tell me if i am on the right track?

define

rule Modus-Ponens   # Forward Chaining
when {
    $var has adjective $ad;
} 
then {
    $n isa noun $n;
    insert $n noun has adjective $ad;  # update the database with new information
}
 
rule Modus-tollens
when {
 $var has! adjective $ad;  
}
then {
    $var isa! noun $n;
}

rule Hypothetical-Syllogism
when{
   $t thing has $t2 thing;
   $t2 thing has $t3 thing; 
}
then{
    $t thing has $t3 thing;
    insert $t thing has $t3 thing;   # update the database with new information
}

What is the intended meaning of has! and isa! ?

I thought that ! denotes negation, so has! = has not and isa! = is not.
I saw it in Define rules :: TypeDB Documentation portal

I’m afraid this is not true. Please check the following link for the relevant documentation - Advanced patterns & queries :: TypeDB Documentation portal

has! is not part of the TypeQL language.
isa! means the type matches exactly.
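
For example, reusing the storyality-hierarchy schema from earlier in this thread:

# matches instances of storyality-hierarchy and of its subtypes,
# e.g. indirect-storyality-hierarchy
match $h isa storyality-hierarchy;

# matches only instances whose exact type is storyality-hierarchy
match $h isa! storyality-hierarchy;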

On the logic:
Modus Ponens is essentially the default inference rule in TypeQL.
You’ll have a hard time explicitly implementing generalised Modus Tollens because of the way negation works in Horn clause logic. There’s no native way of inferring a negation. Rather, (through the closed world assumption), the expected way of negating a ‘proposition’ is if there is no rule causing the proposition to be true. This may not work for the Modus Tollens style reasoning.
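
To illustrate with a standalone hypothetical schema (not part of your model): negation may appear in a rule’s condition, but never in its conclusion, so the closest you get is concluding a positive fact from the absence of another:

define
orphan sub attribute, value boolean;
parenthood sub relation, relates parent, relates child;
person sub entity, owns orphan,
  plays parenthood:parent, plays parenthood:child;

rule person-with-no-known-parent-is-orphan:
  when {
    $p isa person;
    not { (child: $p) isa parenthood; };
  } then {
    $p has orphan true;  # positive conclusion inferred from absence (closed world)
  };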

On the larger intent of your rules:
isa is a strict relation in TypeQL which determines the type of each ‘thing’. The type of a ‘thing’ is specified the moment it is created, so rules which try to assign a type (or disallow the assignment of a type) to a thing are illegal and meaningless.
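
So instead of concluding a type, a rule has to conclude attribute ownership or a relation. A tiny hypothetical example (types not taken from your schema):

define
spelling sub attribute, value string;
pos sub attribute, value string;
word-entity sub entity, owns spelling, owns pos;

rule the-is-a-determiner:
  when {
    $w isa word-entity, has spelling "the";
  } then {
    $w has pos "determiner";  # legal: conclude an attribute
  };
# a conclusion like `$w isa determiner;` would be rejected: rules cannot assign types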

tl;dr: I fear you’re jumping in a bit too fast without fully understanding the logical setting that TypeQL works in. I don’t want to jump in and say TypeQL is not suitable for your use-case, but it certainly seems unsuitable for your approach.

I was thinking that by modelling the parts of speech (Part of speech - Wikipedia), the rules of inference (List of rules of inference - Wikipedia) and a bit of how the world is structured, maybe a bot could be made that learns new rules and data and updates its schema as new information is presented to it.
Any ideas or guidance on how to do that?

A good start would be to define the learning problem, maybe with a concrete example. I’ll have a crack at conveying what I mean by that.

  1. What is your learning setting? Supervised / Unsupervised?
  2. What is the target output? ( A rule, or a fact?)
  3. If your output is a rule, what’s your template language?

E.g. My schema could be

pos sub attribute, value string;  # verb, adj, noun, etc.
word sub attribute, value string; # the actual words in your corpus 
sentence sub relation, relates first-word;
# A sentence points to the first-word, and the rest is taken care of by the next-word relation
next-word sub relation, relates this-word, relates next-word;

# Tags a given occurrence of a word in a given sentence with a pos-tag
pos-tag sub relation, relates sentence, relates word, relates tag;

word plays ...;
pos plays ...;

inferred-in-sentence sub relation, relates sentence, relates word; 
 rule word-in-sentence:  # A rule which walks the next-word relations to add a direct relation between each word and the sentence in which it occurs
when {
...   
} 
then { (word: $w, sentence: $s) isa inferred-in-sentence; }; 
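
One way the elided when could be filled in (just a sketch; it assumes the plays statements above are completed so that word plays the roles in next-word and inferred-in-sentence, and sentence plays inferred-in-sentence:sentence) is to split the walk into a base rule and a recursive rule:

define
rule first-word-in-sentence:
  when {
    $s (first-word: $w) isa sentence;
  } then {
    (word: $w, sentence: $s) isa inferred-in-sentence;
  };

rule following-word-in-sentence:
  when {
    (word: $w1, sentence: $s) isa inferred-in-sentence;
    (this-word: $w1, next-word: $w2) isa next-word;
  } then {
    (word: $w2, sentence: $s) isa inferred-in-sentence;
  };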

I define my problem as a supervised, association-rule-mining-style learning problem which hopes to learn rules that determine the pos-tag of a given word in a given sentence. I’m hoping the dataset has some tags on the sentences, so we have at least a PU learning (positive & unlabelled) setting.

i.e.
the then must be of the form (word: $w, sentence: $s, tag: $p) isa pos-tag; (Similar to modeh in aleph)
the when of the rule may contain statements of the form: (Similar to modeb in aleph)

$v1 isa sentence;
(word: $v1, sentence: $v2) isa inferred-in-sentence;
(this-word: $v1, next-word: $v2) isa next-word;
(tag: $v1, sentence: $v2, word: $v3) isa pos-tag; 
...
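
Put together, a rule that such a learner might output could look something like this (purely hypothetical, and assuming the elided plays statements let word play the word roles and pos play pos-tag:tag):

define
rule noun-after-determiner:
  when {
    (this-word: $v1, next-word: $v2) isa next-word;
    (tag: $t1, sentence: $s, word: $v1) isa pos-tag;
    $t1 "determiner" isa pos;
    $t2 "noun" isa pos;
  } then {
    (tag: $t2, sentence: $s, word: $v2) isa pos-tag;
  };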

My hypothesis would be: If I have a few sentences tagged in the database, this learner can hopefully learn enough rules to fill in the missing tags.

Obviously this is all hand-wavy, but it’s a first attempt to define the problem concretely. I would need an external learner (such as Aleph, but adapted to TypeQL) to actually learn the rules, but it should be possible.

Note that my view of rule-learning is based heavily on Inductive Logic Programming, given my background in it. (It’s also possible an abductive setting is applicable here.)
There is no modus tollens involved, since all of ILP is based on logic programming, which uses resolution (similar to modus ponens) as the only inference rule.

Quick refs:
[1] PU Learning section: One-class classification - Wikipedia
[2] Inductive logic programming - Wikipedia
[3] Abductive logic programming - Wikipedia

It turns out there have been attempts to use rule-learning in an ILP setting to do part-of-speech tagging: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=49723b5ec420c1a200fc6300a96e12b52705927d

Ok, this is a lot to process. To answer your questions though:

Since the application will be a chatbot, the learning setting will initially be supervised, with the hope that at some point the chatbot will start learning without supervision. I can provide a dictionary that says whether a word is a verb, an adjective, etc.
The target output of the system will be either natural language or TypeQL to be imported into the database.
One of the things it should learn, during the supervised learning period, is grammar, which will give it a way to create its own templates for its answers.
I also expect that spaCy ( https://spacy.io/ ) will be helpful for many of these tasks.