Ama Verbs in Comparative Perspective - Dotawo Journal

Preliminaries

Ama is a North Eastern Sudanic language spoken in villages to the west and north-west of Dilling, near where Kordofan Nubian languages are spoken in the north-western Nuba Mountains. “Ama” (ámá “people”) is the self-designated name of the language community identified by the ISO 639-3 code [nyi] and replaces the name “Nyimang” in older sources,1 as “Ama” is the name used in local literature in the language created over the last three decades. Nyimang is an altered form of “Nyima,” one of the mountains in the Ama homeland, which is now used as the name of the branch of Eastern Sudanic consisting of Ama [nyi] and Afitti [aft]. I will assume that Nyima is one of a group of four extant northern branches of the Eastern Sudanic family, the others being Nubian, the Nara language, and Taman.2

Ama examples unless otherwise stated are from the author’s fieldwork verified with leading Ama writers who oversee literacy in the language. For vowels, I distinguish five –ATR brassy vowels ɪɛaɔʊ and five +ATR breathy vowels ieəou, as represented fluently by Ama writers using five vowel letters {aeiou} and a saltillo {ꞌ} in breathy words. For tone, Ama’s nearest relative Afitti has been described as having two contrastive tone levels,3 but Ama has three levels, which play a role in the verb system as well as the wider lexicon as shown in table 1.

kɛ́r “woman” | nɪ́ “kill (factative)” | ɕɪ́ɛ̄ “do (transitive)”
kɛ̄r “crane (bird sp.)” | nɪ̄ “kill (progressive 3rd person)” | ɕɪ̄ɛ̄ “say”
kɛ̀r “around” | nɪ̀ “kill (progressive 1st/2nd person)” | ɕɪ̀ɛ̄ “do (intransitive)”

Table 1: Level tone contrasts in Ama

A brief overview of Ama morphosyntax can be gained by locating it in the typology of Heine and Vossen,4 which assesses African languages on the presence of nominal classification, nominal case, and verbal derivation. In Ama, the role of nominal classification is limited due to a remarkable lack of nominal number affixes, although there is some differentiated grammatical behavior of rational nominals.5 However, case is extensive in Ama,6 as is typical of Nilo-Saharan verb-final languages,7 and likewise verbal derivation is extensive.

Feature | Presence | Categories
1. Nominal classification | limited | rational
2. Nominal case | extensive | accusative, dative, genitive, ablative, locatives
3. Verbal derivation | extensive | causative, applicative, reciprocal, directional

Table 2. Ama morphosyntax

The Syntax of Ama Verbs

Ama verbs follow a syntax that is partly familiar from other Nilo-Saharan languages. It has SOV word order, although as we shall see, Ama is not strictly verb-final. It also has coverbs that occur with an inflecting light verb. As in Tama,8 most Ama verbs take their own inflections but coverbs are also seen quite frequently. Many Ama coverbs fit Stevenson’s characterization that the coverb occurs before the light verb stem ɕɪɛ “do/say” and is either an ideophone (with marked phonology such as reduplication or non-mid tone) or a word marked by the suffix -ɛ̄n (typically a borrowed verb).9 The form of the Ama coverb suffix -ɛ̄n matches the Fur co-verb suffix -ɛn ~ -ɛŋ.10 The transitivity of the predicate is distinguished in Ama by the tone on the light verb ɕɪ̀ɛ̄/ɕɪ́ɛ̄.

Intransitive coverbs | Transitive coverbs
nʊ̄nʊ̄ɲ ɕɪ̀ɛ̄ “hop” | díɟí ɕɪ́ɛ̄ “work”
ɟɪ̀ɟɪ̀ɡ ɕɪ̀ɛ̄ “speak angrily” | ɟɛ̀rɟɛ̀r ɕɪ́ɛ̄ “scatter”
àɽɪ̀mɛ̀ ɕɪ̀ɛ̄ “be angry” | t̪úūl ɕɪ́ɛ̄ “destroy”
ōlɡ-ēn ɕɪ̀ɛ̄ “cry” | dɪ́ɡl-ɛ̄n ɕɪ́ɛ̄ “gather” (Kordofan Nubian *ɖigil)11
tɔ̄ɡl-ɛ̄n ɕɪ̀ɛ̄ “tie oneself” | fɔ̄ɟ-ɛ̄n ɕɪ́ɛ̄ “make suffer”
sɛ̀ɡ-ɛ̄n ɕɪ̀ɛ̄ “complain” | tɪ̄m-ɛ̄n ɕɪ́ɛ̄ “finish”
– | kɔ̄w-ɛ̄n ɕɪ́ɛ̄ “iron” (Sudanese Arabic kowa)
– | rɛ̄kb-ɛ̄n ɕɪ́ɛ̄ “ride” (Sudanese Arabic rikib)
– | mɪ̄skɪ̄l-ɛ̄n ɕɪ́ɛ̄ “give someone a missed call” (S. Arabic miskil)

Table 3. Ama coverbs

While Ama’s verb-final word order and use of coverbs are reminiscent of other Nilo-Saharan languages, relative clauses in Ama are of a globally rare type. Ama uses adjoined relative clauses at the end of the main clause, and these modify the last noun of the main clause.1213

(1)

(2)

The adjoined relative clause strategy means that verbs tend not to occur in noun phrases in Ama, although for completeness we should observe that they are not entirely excluded. Since the subject of a transitive clause cannot be modified by an adjoined relative clause, being separated from it by an object or oblique noun, speakers consulted confirmed that it is grammatically acceptable to modify a subject noun by a progressive verb within the noun phrase as in (3), although they felt this is not used much, and I have not found examples in texts. However, verb participles marked by the suffix -ɔ̀ (or -ò by vowel harmony) also occur in noun phrases, including in texts, as in (4) and (5).

(3)

(4)

(5)

Nevertheless, the adjoined relative clause strategy is an innovative feature of Ama that tends to place information about participants outside the noun phrase where they are mentioned. A similar distribution applies to the expression of number. Within the noun phrase, there are no number affixes, although there is a plural specifier ŋɪ̄ or ɡɪ̄ that can be used with rational nouns as seen in (6). Speakers consulted assess this specifier the same way as unmarked relative clauses within the noun phrase: acceptable, but not used much. However, Ama also has a post-verbal quantifier ɡàɪ̀ that can be used when there is a plural participant in the clause, as shown in (7).14

(6)

(7)

We will return to this tendency to express relative clauses and number late in the clause after considering other evidence from verb stems.

Ama Verb Stems

Stevenson discovered the existence of two stems of each Ama verb.15 The forms of the two stems are not fully predictable from each other in general, and their usage depends on aspect.

The Factative–Progressive Distinction

The aspectual functions of the two stems were described by Stevenson as definite and indefinite aspect, and relabeled as perfective and imperfective by more recent authors. However, the usage of the former stem meets the definition of “factative,”16 such that it has a past perfective reading when used for an active verb like “eat,” but a present continuous reading when used for a stative verb like “know.” The other stem has a present progressive reading, which is marginal for stative verbs where the meaning contribution of progressive to an already continuous verb is highly marked.17 The factative–progressive analysis is helpful when we consider the history of these stems below.

aspect | active verb | stative verb
factative | t̪àl “ate” (past perfective) | t̪ʊ̄-máɪ́ “know” (present continuous)
progressive | tām “is eating” | ?máɪ́ “is knowing”

Table 4. Verb stems of active and stative verbs
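The two-way reading rule behind table 4 can be stated very compactly. As a toy illustration (my own formalization, not the author's), the following Python sketch maps a stem choice and lexical verb class to the readings described above:

```python
# A toy model (not from the source) of the factative reading rule: a
# factative stem is read as past perfective with active verbs but present
# continuous with stative verbs; the progressive stem yields a present
# progressive reading (marginal for statives, per the text).

def aspect_reading(stem_type, verb_class):
    if stem_type == "factative":
        return "past perfective" if verb_class == "active" else "present continuous"
    if stem_type == "progressive":
        return "present progressive"
    raise ValueError(f"unknown stem type: {stem_type}")

print(aspect_reading("factative", "active"))   # past perfective, e.g. "ate"
print(aspect_reading("factative", "stative"))  # present continuous, e.g. "know"
print(aspect_reading("progressive", "active")) # present progressive, e.g. "is eating"
```

The point of the sketch is that "factative" is not itself a tense: the temporal reading falls out from the verb's lexical aspect.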

Stem Formation and the Verb Root

Although factative aspect is broader in meaning and more heavily used in text, the progressive stem is generally more basic in form, often consisting only of the bare root. However, neither the factative stem nor the progressive stem is predictable from the other in general because: (i) factative stems belong to various theme vowel classes, and some belong to a class taking a formative prefix t̪V-; (ii) in some verbs the two stems have two different suppletive roots; and (iii) the progressive stems of some verbs require certain obligatory incorporated affixes. When the root is extracted from any additional formatives, CVC is the most frequent verb root shape.

factative | progressive | gloss | morphology other than factative theme vowel
sāŋ-ɔ̄ | sāŋ | search | –
kɪ̄r-ɛ̄ | kɪ̄r | cut | –
wāɡ-ā | wʊ̄ɔ̄ | keep | suppletive roots
t̪ī-ə̀ | túŋ | sleep | suppletive roots
t̪áw-ɔ̄ | ɡēd̪-ì | cook | suppletive roots, final -i required after
ɟɛ́ɡ-ɛ̄ | ɟēɡ-īn | leave s.th. | applicative -(ī)n
á-bɪ̄ɽ-ɪ̄ŋ-ɔ̄ | á-bɪ̄ɽ-ɪ̄ŋ | invent | causative á- and inchoative -ɪ̄ŋ
t̪ī-ŋīl-ē | ŋɪ̄l | laugh | factative t̪V-
t̪ū-mūs-ò | mús-èɡ | run | factative t̪V- ~ directional -èɡ
t̪ɪ́-ɡɛ̄l-ɛ̄ | á-ɡɛ̄l | wash | causative-factative t̪V́- ~ causative á-
ɕɪ̀-ɛ̄ | á-ɕɪ̄ | do (intr.) | causative á-

Table 5. Examples of verb stems

The CVC shape of verb roots is characteristic across Eastern Sudanic languages. In Gaahmg, for example, at least 90% of verb roots are CVC, whereas nouns are much more varied in shape.18 CVC is also the predominant shape in the following comparative data for verbs across Northern branches of Eastern Sudanic.19

gloss | Nubian | Nara | Taman | Nyima | Proto-NES
be | *-a(n)/*-a-ɡV | ne-/ge- (pl.) | *an-/*aɡ- | *nV | *(a)n/*(a)ɡ (pl.)
burn | *urr | kál, war | *wer | *wul “boil” | *wul [*wel?]
buy | *jaan | tol ~ dol- | *tar | – | *tol
come | *taar | til | *or, pf. *kun | *t̪ar/*kud̪ | *tar, [*kud?]
cut | *mer | ked | *kid | - (Ama /kɪr) | *kɛd
dance | *baan | bàl, bàr- | – | *bal/fal | *bal
drink | *nii | l-, líí- | *li | - (Ama /li) | *li
eat | *kal | kal | *ŋan | *t̪al/*tam | *kal/*kamb (pl.)
give | *tir (2/3), *deen (1) | nin | *ti(n) | *t̪Vɡ, *t̪ɔ́ŋ (1) | *te(n) [final C?], *den
look | *ɡuuɲ- | – | *ɡun, pf. *ɡud | *t̪iɡol | *guɲ [final C?]
love, want | *doll, *oon | sol- | - (Tama tar) | - (Ama /war) | *tor
sit | *ti(i)g/*te(e)g | dengi, daŋŋi “wait” | *juk | *dɔɲ | *daŋ
take, carry | *aar- | – | *ar-i | *-ur | *ar
take, gather | *dumm | nem- | - (Tama tɔ-mɔɽ) | - (Ama dum-) | *dɔm
take, raise | *eɲ | hind | *eɲ | - (Ama ɲɔn “carry”) | *meɲ ~ *ɲeɲ

Table 6. Verbs across Northern East Sudanic (NES)

T/K Morphology for Factative/Progressive

An alternation between t̪- and k- cuts into the characteristic CVC shape in one class of Ama verbs as a marker of aspect along with the theme vowel.

factative stem | progressive stem | gloss
t̪-ùɡ-è | k-ūɡ | build
t̪-īw-ò | k-íw | dig
t̪-ūɕ-ē | k-úɕ-ín | light (fire)

Table 7. T/K marking on Ama verbs

A longer list of examples of this alternation shown in table 8 was documented by Stevenson, Rottland, and Jakobi, albeit with a different standard of transcription; they also detected the alternation in Afitti (tosù/kosìl “suckle,” tòsù/kosìl “light fire”).20

factative stem | progressive stem | gloss
tuɡɛ̀ | kwò | build
tàiɔ̀ | kaì | chop
tìwò | kìù | dig
tìwò | kèù | fall (of rain)
twɛ̀ | kwài | rear, bring up
twèr | kweàɡ | grow (v.i.)
tɔwɛ̀ | kwɔ̀i | grow (v.t.)
tuwɛlɛ̀ | kwɛlì | guard
tuɡudò | kwoɡidì | mix up, tell lies
toromɔ̀ | kwòròm | gnaw
toso | kwoʃì | suck (milk, of baby)
tɔʃìɡ | kwɔʃìɡ | suckle
tosùn | kwosùn | burn (v.i.)
tuʃè | kwuʃìn | light fire
tɛ̀nɛ̀ | kɛndìr | climb
tɛnìɡ | kɛndɛ̀ɡ | mount

Table 8. More verbs with T/K marking

T and K are well-known markers of singular and plural in Nilo-Saharan languages,21 but in Ama and Afitti where there is no T/K morphology on the noun, essentially the same alternation (*t becomes dental in the Nyima branch)22 is found on the verb. It also cuts into the characteristic CVC verb root shape, implying that it is an innovation on the verb. I therefore propose that this class of verbs attests the Nyima cognate of the wider Nilo-Saharan T/K alternation. This entails a chain of events in which the T/K alternation first moved from the noun (singular/plural) to the verb (singulactional/pluractional), and then shifted in meaning from verbal number to verbal aspect (factative/progressive).

Both steps in this proposed chain are indeed plausible cross-linguistically. As to the first step, the possibility of nominal plural markers being extended to verbal pluractionals is familiar from Chadic languages, where the same formal strategies such as first-syllable reduplication or a-infixation may be found in plural nouns and pluractional verbs.23 In the Nyima languages, the productive innovation at this step appears to have been the extension of singulative T to a verbal singulactional marker. This is seen in the fact that t̪- alternates with other consonants as well as k in Ama (t̪ān-ɛ̄/wɛ̄n “talk,” t̪ɛ̀l-ɛ̄/wɛ̄ɛ́n “see,” t̪àl/tām “eat”), or is prefixed in front of the root (t̪ʊ́-wár-ɔ̄/wār “want,” t̪ī-ŋīl-ē/ŋɪ̄l “laugh,” t̪ì-fìl-è/fɪ̄l “dance,” t̪ū-mūs-ò/mús-èɡ “run,” t̪ʊ̄-máɪ́/máɪ́ “know,” t̪-īlm-ò/ɪ́lɪ́m “milk”). There is also external evidence from Nubian and Nara cited in table 6 above that *k is the original initial consonant in *kal “eat” replaced by t̪- in Ama and Afitti.

As to the second step, the prospect of verbal number shifting to verbal aspect is supported by semantic affinity between pluractional and progressive. Progressive aspect often entails a process that is iterated (“is coughing,” “is milking”) over the interval concerned.24 In Leggbo,25 a Niger-Congo language, the progressive form can have a pluractional reading in some verbs, and conversely, verbs that fail to form the regular progressive C# → CC-i because they already end in CCi can use the pluractional suffix -azi instead to express progressive aspect. In Spanish,26 a Romance language, there is a periphrastic paradigm comprising progressive (estar “be” + gerund), frequentative pluractional (andar “walk” + gerund), and incremental pluractional (ir “go” + gerund). The two Spanish pluractionals have been called “pseudo-progressives,” but conversely one could think of progressive aspect as pseudo-pluractional. What is somewhat surprising in Ama is that progressive stems, being morphologically more basic (see table 5), lack any dedicated progressive affixes that would have formerly served as pluractional markers.27 However, some progressive marking is found in irregular alternations that reveal former pluractional stems.

In t̪àl/tām “eat,” the final l/m alternation is unique to this item in available word lists, although l/n occurs elsewhere (kɪ́l/kín “hear,” t̪ɛ̀l-ɛ̄/wɛ̄ɛ́n “see”). The final l/m alternation is nevertheless also found in Afitti (t̪ə̀lɔ̀/tə̀m “eat”) and in Kordofan Nubian (*kol ~ kel/*kam “eat”).28 Kordofan Nubian *kam is used with a plural object, a pluractional function, so in the Nyima branch the proposed shift pluractional → progressive derives the progressive function of final m found in Ama, just as it does for the initial k in t̪/k alternations or the t in t̪àl/tām “eat.” Furthermore, a final plosive in Old Nubian (ⲕⲁⲡ-29; Nobiin kab-) suggests that the unique m in “eat” arose by assimilation of the final nasal (realized as n in the other Ama verbs mentioned) to a following *b, which was fully assimilated or incorporated in Old Nubian.

Seen in this light, the significance of moving T/K morphology onto verbs in the Nyima branch is that it renewed an existing system of irregular singulactional/pluractional alternations. We then have a tangible account of where Ama’s missing noun morphology went, because formerly nominal morphology is found on the verb instead.

Concretization of Core Clause Constituents

We can now tie this finding together with the findings on verb syntax in §2. Both T/K number marking and relative clause modification have moved out of the noun phrase, and in these comparable changes we can observe a trend towards concretization of noun phrases, with number and clausal information about the participant being expressed later in the clause.

The trend towards concretization also affects the verb itself. T/K and other irregular stem alternations did not maintain their pluractional meaning, as this evolved into a more concrete construal of the predicate over an interval of time as progressive aspect. Since concretization affected the verb as well as noun phrases, it affected the entire core SOV clause, with plurality as well as relative clauses largely deferred to after the verb.

A role for concreteness in grammar was previously proposed for the Pirahã language of Brazil by Everett.30 Everett’s approach remains highly controversial,31 particularly, I believe, in its attempt to constrain grammar by culture directly in the form of a synchronic “Immediacy of Experience Constraint” on admissible sentence constructions and lexemes in Pirahã. My proposal here is deliberately less ambitious, appealing to concreteness as a diachronic trend in the Nyima branch, not as a constraint on the current synchronic grammar of Ama. Thus, Ama typically attests a separation between a concrete SOV clause and post-verbal modification, but this is not a strict division in the grammar, because it is not impossible to express number or relative clauses within the noun phrase, just infrequent. The concretization process in Ama must also have been specific enough not to have eliminated adjectives from the noun phrase. Ama has adjectives, as shown in examples (8)–(11), which occur as attributive modifiers of nouns in their unmarked form, whereas in predicates they are separated from the subject noun by a clause particle and occur as the complement of the inflecting copula verb nɛ̄. Ama adjectives include numerals and quantifiers, despite the limited role of number in the grammar.

(8)

(9)

(10)

(11)

Ama Verbal Affixes

Research over the past century has also been gradually clarifying the complex morphological system of Ama verbs.32 Factative and progressive aspect are distinguished in the affix system as well as in stems, and there is an evolving portfolio of pluractional affixes.

Affix Selection and Order

Some verbal affixes are selected depending on factative or progressive aspect in Ama, just as verb stems are. For example, different suffixes for past tense or for directional movement are selected in the different aspects:

aspect | stem | past
factative | t̪àl | t̪àl-ʊ̀n
progressive | tām | tām-áʊ́

Table 9a. Affix selection according to aspect: “eat”

aspect | stem | direction
factative | dɪ̀ɟ-ɛ̄ | dɪ̀ɟ-ɛ̄-ɡ
progressive | dɪ̄ɟ-ɪ̄ | dīɟ-ír

Table 9b. Affix selection according to aspect: “throw”
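The selection pattern in tables 9a–b can be pictured as a lexicon keyed by aspect. The Python sketch below is my own illustration (the dictionary layout and category labels are hypothetical, not the author's notation); it stores surface forms rather than concatenating morphemes, to sidestep morphophonological fusion such as dīɟ-ír:

```python
# Hypothetical lexicon illustrating affix selection by aspect: the same
# inflectional category (past, direction) surfaces with a different suffix
# depending on whether the factative or progressive stem is selected.
PARADIGM = {
    ("eat", "factative"):     {"stem": "t̪àl",   "past": "t̪àl-ʊ̀n"},
    ("eat", "progressive"):   {"stem": "tām",   "past": "tām-áʊ́"},
    ("throw", "factative"):   {"stem": "dɪ̀ɟ-ɛ̄", "direction": "dɪ̀ɟ-ɛ̄-ɡ"},
    ("throw", "progressive"): {"stem": "dɪ̄ɟ-ɪ̄", "direction": "dīɟ-ír"},  # -ír fuses with the stem vowel
}

def inflect(verb, aspect, category="stem"):
    """Look up the surface form for a verb in a given aspect and category."""
    return PARADIGM[(verb, aspect)][category]

print(inflect("eat", "factative", "past"))           # t̪àl-ʊ̀n
print(inflect("throw", "progressive", "direction"))  # dīɟ-ír
```

The design point is that aspect indexes whole sub-paradigms, not just stems, which is exactly what makes the affixes co-exponents of aspect.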

The same is true of passive and ventive suffixes, but in factative aspect the suffixes replace the theme vowel, so that the affixes are the sole exponent of aspect in many verbs:

aspect | stem | passive
factative | ásɪ̄d̪āy-ɛ̄ | ásɪ̄d̪āy-áɪ́
progressive | ásɪ̄d̪āɪ̄ | ásɪ̄d̪āy-àɡ

Table 10a. Affix selection as sole exponent of aspect: “paint”

aspect | stem | ventive
factative | ɪ̄r-ɛ̄ | ɪ̄r-ɪ́ɪ̄ɡ
progressive | ɪ̄r | ɪ̄r-ɪ́d̪ɛ̄ɛ̀ɡ

Table 10b. Affix selection as sole exponent of aspect: “send”

In passive and in past, affix order also varies according to aspect with respect to the dual suffix -ɛ̄n:

aspect | stem | dual passive
factative | ásɪ̄d̪āy-ɛ̄ | ásɪ̄d̪āy-áy-ɛ̄n
progressive | ásɪ̄d̪āɪ̄ | ásɪ̄d̪āy-ɛ̄n-àɡ

Table 11a. Affix order variation according to aspect: “paint”

aspect | stem | dual past
factative | sāŋ-ɔ̄ | sāŋ-ɛ̄n-ʊ̀n
progressive | sāŋ | sāŋ-áw-ɛ̄n

Table 11b. Affix order variation according to aspect: “search”

The origin of this affix order variation is revealed by further evidence. Passive marking comes after dual in progressive aspect, whereas past marking comes after dual in factative aspect, but the common feature of both suffixes -àɡ, -ʊ̀n placed after the dual is that they both bear low tone. Two more suffixes with low tone, directional -ɛ̀ɡ ~ -ɡ (the second allomorph is toneless) and mediocausative -àw ~ -ɔ̀ (the second allomorph is used word-finally) appear after the dual, but if another low-tone suffix is added after the dual, they appear before the dual instead. Hence, there is only one more affix slot in Ama after the penultimate dual suffix.

gloss | throw | throw to (du.) | elicit (du.)
factative | dɪ̀ɟ-ɛ̄-ɡ | dɪ̀ɟ-ɪ́-n-ɪ̄ɡ | kɪ́l-ɛ̄n-ɔ̀
– | throw-th-dir | throw-ven-du-dir | hear-du-medcaus
factative imperative | dɪ̀ɟ-ɛ̀ɡ-ɛ̄-ɪ̀ | dɪ̀ɟ-ɪ́-ɡ-ɛ̄n-ɪ̀ | kɪ́l-àw-ɛ̄n-ɪ̀
– | throw-dir-th-imp | throw-ven-dir-du-imp | hear-medcaus-du-imp
factative past | dɪ̀ɟ-ɛ̀ɡ-ɔ̄-ɔ̀n | dɪ̀ɟ-ɪ́-ɡ-ɛ̄n-ʊ̀n | kɪ́l-àw-ɛ̄n-ʊ̀n
– | throw-dir-th-pst | throw-ven-dir-du-pst | hear-medcaus-du-pst

Table 12. Inward displacement of suffixes by an imperative or past suffix

Both types of affix alternation in tables 11 and 12 involve low-tone suffixes in the final slot. Therefore, the development of all affix order alternations can be attributed to a single historical shift of all low-tone suffixes to the final slot. However, this shift is not realized in verbs containing two low-tone suffixes, because only one of them can go in the final slot. The only final-slot suffix that does not alternate is the imperative -ɪ̀, which leaves imperative as original to the final slot. Other suffixes originate from more internal slots to the left of the dual.
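The single-final-slot generalization can be stated procedurally. The following Python sketch is my own formalization (the slot numbers and gloss labels are assumptions for illustration, not the author's analysis): all low-tone suffixes gravitate to one final slot after the dual, and when two low-tone suffixes co-occur, only the outermost one wins that slot, displacing the other inward, as in table 12.

```python
# Hypothetical suffix inventory: inner-slot order plus a low-tone flag.
SUFFIXES = {
    "ven":     {"order": 1, "low": False},  # ventive
    "dir":     {"order": 2, "low": True},   # directional -ɛ̀ɡ ~ -ɡ
    "medcaus": {"order": 2, "low": True},   # mediocausative -àw ~ -ɔ̀
    "du":      {"order": 3, "low": False},  # dual -ɛ̄n
    "pst":     {"order": 4, "low": True},   # past
    "imp":     {"order": 4, "low": True},   # imperative -ɪ̀ (original final-slot suffix)
}

def order_suffixes(morphs):
    """Return the surface order of the given suffix glosses."""
    low = [m for m in morphs if SUFFIXES[m]["low"]]
    # Only one low-tone suffix may occupy the final slot: the outermost one.
    final = max(low, key=lambda m: SUFFIXES[m]["order"]) if low else None
    inner = [m for m in morphs if m != final]
    inner.sort(key=lambda m: SUFFIXES[m]["order"])  # stable sort keeps ties in place
    return inner + ([final] if final else [])

# Mirrors table 12: directional is final on its own, but is displaced
# inward when a past (or imperative) suffix claims the final slot.
print(order_suffixes(["ven", "du", "dir"]))         # ['ven', 'du', 'dir']
print(order_suffixes(["ven", "dir", "du", "pst"]))  # ['ven', 'dir', 'du', 'pst']
```

On this formalization, the affix order alternations reduce to a single constraint (one final slot) interacting with ordinary slot order.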

As for the origin of affix selection according to aspect, this presumably arose as an extension of the systematic stem selection that occurs for every verb in Nyima languages. This question remains complex, however, because each of the categories affected (past, passive, directional, ventive) will have its own history as to how alternating affixes were acquired in these conditions. One modest proposal is that the NES plural copula *aɡ shown earlier in table 6 is the likely source of the progressive passive suffix -àɡ in Ama,33 via the shift from pluractional to progressive (§3.3), and by a plausible assumption of a transition in passive marking strategy from use of a copula to morphological marking on the verb. This sourcing does not extend to the other passive suffix in factative aspect -áɪ́, however, which does not resemble the singular copula *an. Some similar proposals that other progressive suffixes have pluractional origins are made in the course of §4.2 below.

Pluractional Affixes

Ama has verbal extensions belonging to the family of pluractionals, markers that associate plurality with the verb in different ways, which has emerged as an area of study in language description in recent years.34 These extensions are particularly comparable with Nubian and other related languages.

Distributive Pluractional

Ama has a distributive suffix -ɪ́d̪ that marks incremental distribution of an event over time or over participants (àɪ̀ bā fʊ̄rā mʊ̄l t̪àl-ɪ́d̪-ɛ̀ “I ate until I had eaten five rabbits,” wùd̪ēŋ bā dɔ̄rɛ̄ŋ t̪ɛ̀l-ɪ́d̪-ɛ̄ “The child saw each of the children”).35 Although called “plural” in earlier works, this category was, remarkably, largely unaffected by the shift pluractional → progressive analyzed in §3.3 above,36 indicating that we are dealing with two distinct pluractionals, a distributive pluractional and another former pluractional that is now progressive. Ama has a second distributive suffix -r used only on verbs with the theme vowel -a (wāɡ-ā “keep,” distributive wāɡ-ɪ́d̪-ā-r).37 Ama’s immediate relative Afitti has a “verbal plural” suffix -tər,38 which corresponds to Ama -ɪ́d̪ and -r combined, reminiscent of their use in that order in Ama on verbs with the theme vowel -a, but regularized to all verbs in Afitti. The Ama suffix -ɪ́d̪ also closely resembles a “plural action” suffix -(ɨ)t̪ in the nearby Eastern Sudanic language Temein,39 and a “plurality of action” suffix -íd in Midob.40 The distributive suffix -ij in Kunuz Nubian is also similar.41

Distributive pluractionals are characterized by optionality with a plural participant (distributivity implies plurality but is distinct from it),42 which distinguishes them from plural-object pluractionals found in many Nubian languages that mark, and are thus obligatory with, plural objects.43 Distributives are also characterized by non-occurrence with dual participants (to be non-trivial, distribution requires at least three targets).44 The Ama distributive has the first property of optionality in transitive (but not intransitive) verbs, and the second property of non-duality with respect to subjects (but not objects).45 This second property is shared by the Afitti suffix -t(ə)r which likewise does not occur with dual subjects.46 This is shown in Afitti field data below,47 where the suffix -t(ə)r contrasts in this respect with plural pronominal affixes 1pl ko-, 2pl o-, and 3pl -i which do occur with dual subjects.

1st person | gloss | 2nd person | gloss | 3rd person | gloss
ɡə́-ɡaɲal | I milk | é-ɡaɲal | you (sg.) milk | kaɲál | he/she milks
kó-ɡaɲal | we (du.) milk | ó-ɡaɲál | you (du.) milk | ɡaɲál-i | they (du.) milk
kó-ɡaɲa-tr̀ | we (pl.) milk | ó-ɡaɲa-tr̀ | you (pl.) milk | ɡaɲá-tər-i | they (pl.) milk

Table 13. Afitti pluractional -t(ə)r not used with dual subjects

Beyond the Nyima branch, the Temein “plural action” suffix -(ɨ)t̪ shares the first property of optionality as it “is by no means always added with plural objects.”48 It actually marks a distributive effect of the verb on the object (ŋɔŋɔt-ɨt̪-ɛ dʉk “I break the stick into pieces”), as also found with the Kunuz Nubian distributive suffix -ij (duɡuːɡ ɡull-ij-ossu “She threw the money here and there”).49 Information on non-occurrence with dual subjects is not reported in these languages, but it appears that this is because non-duality is a feature of incremental-distributive marking as found in Nyima, and not of distributive-effect marking as found in Temein and Kunuz, which can even occur with a singular object, as in the Temein example.

The confirmation of distributive markers across Nubian, Nyima, and Temein implies that a distributive pluractional was present in Eastern Sudanic from an early stage, with a form like *-id. In Nubian the consonant is palatal,50 and although palatals are a difficult area for establishing wider sound correspondences,51 the palatal arises in the plausible conditioning environment of a high front vowel.

Second Historic Pluractionals

Ama’s second distributive suffix -r corresponds to the Nubian plural object marker *-er,52 and since this suffix is much less productive in Ama, it may well have been bleached of its original meaning. In the Kordofan Nubian language Uncu, the cognate extension -er has the same function as the irregular pluractional stem (kol/)kom “eat,” as both occur with plural objects.53 Similarly in Ama, some trills shown below occur in the same category as the irregular progressive stem (t̪àl/)tām “eat,” providing evidence that the trill originally marked the second Nyima pluractional that is now progressive.

The Ama suffix -ar can be added to a progressive verb as a mirative that marks unexpected events (swāy-ɔ́ “was cultivating” → swāy-ɔ̄r-ɔ́ “was unexpectedly cultivating”, where the vowel has harmonized to the following vowel). However, this suffix is also used to disambiguate progressive verb forms from otherwise indistinguishable factatives (sāŋ-ɛ̄n/sāŋ-ɛ̄n, sāŋ-ār-ɛ̄n “search (du.)”),54 providing what looks like an alternate progressive stem to take the dual suffix. Similarly, the negative imperative construction in Ama requires a progressive stem with -ar after the negative particle fá as shown in table 14 below. Inflections occurring in this construction are a plural subject marker à- on the particle, and dual or distributive marking on the verb. Only the dual suffix can occur without -ar, where in my data the dual suffix adds to the longer stem with -ar unless the short stem is suppletive (t̪ī-ə̀/túŋ “sleep,” t̪àl/tām “eat”) and can take the dual suffix without ambiguity with factative aspect.

singular | dual | distributive plural | gloss
fá kɪ̄r-ār | à-fá kɪ̄r-ār-ɛ̄n | à-fá kɪ̄r-ɪ́d̪-ār | don’t be cutting!
fá sāŋ-ār | à-fá sāŋ-ār-ɛ̄n | à-fá sāŋ-ɪ́d̪-ār | don’t be searching!
fá túŋ-ār | à-fá túŋ-ɛ̄n | à-fá túŋ-ɪ́d̪-ār | don’t be sleeping!
fá tām-ār | à-fá tām-ɛ̄n | à-fá tām-ɪ́d̪-ār | don’t be eating!

Table 14. Ama negative imperative paradigms
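The paradigm in table 14 is rule-governed enough to generate. The Python sketch below is my own formalization (function name and flags are hypothetical) of the pattern described above: particle fá (à-fá with a non-singular subject), a progressive stem extended by -ar, dual -ɛ̄n attaching to the -ar stem unless the progressive stem is suppletive, and distributive -ɪ́d̪ before -ār in the plural.

```python
# A sketch of the negative imperative pattern in table 14 (my own
# formalization, not the author's). Per the text, suppletive progressive
# stems like túŋ "sleep" and tām "eat" take the dual suffix directly,
# without -ar, since no ambiguity with factative aspect arises.

def negative_imperative(stem, number="sg", suppletive=False):
    particle = "fá" if number == "sg" else "à-fá"  # à- marks plural subject
    if number == "du":
        verb = (stem if suppletive else stem + "-ār") + "-ɛ̄n"
    elif number == "pl":
        verb = stem + "-ɪ́d̪-ār"  # distributive plural
    else:
        verb = stem + "-ār"
    return f"{particle} {verb}"

print(negative_imperative("kɪ̄r"))                        # fá kɪ̄r-ār
print(negative_imperative("kɪ̄r", "du"))                  # à-fá kɪ̄r-ār-ɛ̄n
print(negative_imperative("tām", "du", suppletive=True))  # à-fá tām-ɛ̄n
print(negative_imperative("sāŋ", "pl"))                   # à-fá sāŋ-ɪ́d̪-ār
```

The suppletive flag is doing the analytical work here: it encodes the observation that -ar is a disambiguation device, dropped exactly where the stem alone is already unambiguous.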

Another trilled suffix -ir marks motion in progress.[^55] It can be added to a progressive verb (dɪ̄ɟɪ̄ “is throwing” → dīɟ-ír “is throwing (motion in progress)”), but on several motion verbs it is documented as part of the progressive stem, as in the examples in table 15 below from Stevenson, Rottland, and Jakobi.[^56] The motion meaning of -ir simply agrees with the semantics of the roots, all of which define motion along some schematic scale, so that the aspectual meaning of -ir assumes greater significance. Hence, -ir approximates a progressive stem formative for this class of verbs. The final example in table 15, due to Kingston,[^57] shows still another trilled suffix -or in the progressive stem of a caused motion verb.

[^55]: I defer description of tone on this affix to another time.
[^56]: Stevenson, Rottland & Jakobi, “The Verb in Nyimang and Dinik.”
[^57]: This verb appears in unpublished data collected by Abi Kingston.

factative | progressive | gloss
bwìɡ | buɡìr | overtake
nɪfɛ̀ɡ | nɪfìr | fall
tɛnɛ̀ | kɛndìr | climb
tɪjɛ | jeìr | shoot
ánasa | ánasor | take down

Table 15. Progressive stems ending in a trill

The trill thus fuses with certain vowels that behave like theme vowels for creating extended progressive stems. As a progressive element, the trill most probably derives from the shift of pluractional → progressive, identifying it as the missing extension of the second Nyima pluractional. We then have an Ama distributive pluractional suffix -ɪ́d̪ that resembles the Nubian distributive pluractional *-[i]ɟ, and Ama “pseudo-pluractional” progressive suffixes of the shape -Vr that resemble the Nubian plural-object pluractional *-er.

Innovative Dual-Participant Pluractional

A late addition to Ama’s pluractional portfolio is its unique dual suffix -ɛ̄n.55 The older form of the Ama dual suffix is -ɪn,56 which has been noted to resemble reciprocal suffixes in other Eastern Sudanic languages, such as Kordofan Nubian -in, Daju -din, Temein , and also Ik -in of the Kuliak group.57 In Ama, its function has evolved to dual reciprocal and other dual participant readings, so for example wʊ̀s-ɛ̄n “greet (du.)” can refer to when two people greeted each other, or someone greeted two people, or two people greeted someone.58 The dual suffix is regularly used in Ama folktales to link two primary characters.59 Although such dual participant marking is extremely rare globally, it becomes possible in Nyima languages in particular, where the incremental-distributive pluractional leaves a paradigmatic gap for dual subjects (as still seen in Afitti in table 13 above), a gap that Ama has filled.

Conclusion: Ama as a Matured North Eastern Sudanic Language

Ama verbs show a number of connections to Nubian and other Eastern Sudanic languages in their clause-final syntax, CVC root shape, and certain affixes. However, these connections are more in form than meaning, as the semantics is highly innovative in such notable shifts as plural → pluractional → progressive and reciprocal → dual, and in the drive towards concretization that has moved the expression of both relative clauses and number out of noun phrases to after the verb. In addition, the movement of low-tone suffixes to the final suffix slot, while itself a formal development, has further advanced the morphologization of aspect, so that stem selection, affix selection, and affix order all vary with aspect in Ama verbs. Next to these considerable changes, Ama’s stable distributive pluractional stands out as indicative of a wider Eastern Sudanic verbal category.

An explanation for the innovations found in Ama will not be found in influence from other languages of Sudan, because several of its innovations are extremely rare (adjoined relative clauses, dual verbal number, tone-driven affix order alternation). Instead of an influx of new forms, we have unusual internal evolution of existing forms, implying relative isolation. Ama then exemplifies what both Dahl and Trudgill call “mature phenomena,”60 found in languages of isolated small communities where the language has time to evolve based on an abundance of specific shared information in a closed society of intimates. Languages spoken by isolated societies of intimates are more likely to conventionalize complex morphological paradigms, unusual categories, and unusual syntax (maturation), whereas larger, multilingual social networks encourage simpler grammars in the sense of smaller paradigms, and pragmatically well-motivated categories and syntax that are found widely in language (pidginization). The aforementioned verbal features of Ama, namely dual number, irregular allomorphy (in suppletive roots and in the use of a second distributive suffix), fusion (in affixes like the passive and ventive that mark aspect as well), polyfunctionality (of the progressive suffix -ar for mirativity or long stem formation), and multiple exponence (of aspect by stem selection, affix selection, and affix order), plus the unusual syntax of adjoined relative clauses, all look like mature language phenomena.61

Ama nominals, similarly, are known for their relatively rich case systems, but similar case paradigms are found in Nubian and other Northern East Sudanic languages, implying that the case system largely matured at an earlier stage and the resulting complexity is retained in all these languages. Thus, it is the verb system rather than the nominal system that provides evidence of maturation in the Nyima branch in particular.

The conclusion that Ama verbs (and post-verbal syntax) have matured as a result of Nyima’s isolated position, away from the river systems that hosted speakers of other languages in the Sudan region in the past, faces the possible difficulty that contacts have in fact been proposed between Nyima and other Nuba Mountain groups. Thus, it has been proposed that the Niger-Congo Nuba Mountain group Heiban borrowed accusative marking and basic vocabulary from Nyima.62 Such contact would have put a brake on maturation in Nyima, because the use of proto-Nyima for inter-group communication between first-language Nyima users and second-language Heiban users would not have supported further growth in complexity.63 However, such contacts are unlikely to have lasted for a large proportion of Nyima history; rather, they were fairly temporary periods punctuating Nyima’s longer isolation. Thus, the Heiban group has now developed separately in the eastern Nuba Mountains for something approaching two millennia (given the internal diversity of the ten Heiban languages found there) since its contact with Nyima.

Some time after the contact with Heiban, Rottland and Jakobi note the likelihood of contact of Kordofan Nubian with Ama and Afitti in the north-west Nuba Mountains before the arrival of Arabic as a lingua franca in the Nuba Mountains.64 Ama and Afitti are more lexically divergent from each other than the Kordofan Nubian varieties are, and therefore were probably already separate communities when the Kordofan Nubians arrived. However, the innovation of dual marking on Ama verbs in the period after separation from Afitti still shows the hallmarks of maturation. It adds an extremely rare category, increases the occurrence of morphologically complex verbs by using a verbal marker in dual participant contexts that were not previously marked, and adds redundancy when agreeing with noun phrases containing two referents. This mature feature of Ama again suggests that any language contact with Kordofan Nubian occurred for only part of the time since Ama separated from Afitti.

This period nevertheless also reveals one significant example of simplification in Ama verbs that supports the idea that language contact occurred. Afitti has pronominal subject markers on the verb, seen earlier in table 13, which are absent in Ama. The pronominal prefixes are not the same in form as the personal pronoun words in Afitti (1sg oi but 1sg prefix kə-),65 and therefore they are not incorporated versions of the current pronoun words, but rather predate them. Some of the Afitti pronoun words (1sg oi, 2sg i)66 are similar to Ama (1sg àɪ̀, 2sg ) and must be retentions from proto-Nyima, hence the older pronominal prefixes must also be retentions in Afitti, but lost in Ama. Their loss in Ama is remarkable against the larger trend of growth in complexity of Ama verbs that we have examined in this paper. The predicted cause of this surprising reversal is pidginization under contact. That is, their loss is evidence that the Ama language was used for inter-group communication, presumably with the Kordofan Nubians, during which (and for which) Ama SOV sentences were simplified by dropping verbal subject marking. If Kordofan Nubians spoke Ama, then borrowing from Ama into Kordofan Nubian is also likely. In verbs, the obvious candidate for borrowing into Kordofan Nubian is the reciprocal suffix -in, as this is not attested elsewhere in Nubian.67 The following two-step scenario would then account for the facts: Ama was learned and used by Kordofan Nubians, during which time Ama dropped verbal subject marking and its reciprocal suffix was borrowed into Kordofan Nubian; next, Ama returned to isolation, in which the reciprocal suffix developed the dual function that is unique to Ama today.


  1. Stevenson, Grammar of the Nyimang Language and “A Survey of the Phonetics and Grammatical Structure of the Nuba Mountain Languages with Particular Reference to Otoro, Katcha and Nyimaŋ,” 40: p. 107. ↩︎

  2. Rilly, Le méroïtique et sa famille linguistique, §4. ↩︎

  3. de Voogt, “A Sketch of Afitti Phonology,” p. 47. ↩︎

  4. Heine & Vossen, “Sprachtypologie,” cited in Kröger, “Typology Put to Practical Use,” p. 159. ↩︎

  5. Norton, “Number in Ama Verbs,” pp. 75–76, 85; Stevenson, “A Survey of the Phonetics and Grammatical Structure of the Nuba Mountain Languages,” 41: pp. 175–176. ↩︎

  6. Stevenson, Grammar of the Nyimang Language, §§2–10. ↩︎

  7. Dimmendaal, “Africa’s Verb-final Languages,” §9.2.3. ↩︎

  8. Dimmendaal, “Introduction” to Coding Participant Marking, pp. 6–7. ↩︎

  9. Stevenson, “A Survey of the Phonetics and Grammatical Structure of the Nuba Mountain Languages,” 41: p. 174. ↩︎

  10. Waag, The Fur Verb and Its Context, p. 49; low tone is unmarked in the Fur two-tone system. ↩︎

  11. Jakobi, Kordofan Nubian, p. 159. Her data from Kordofan Nubian varieties shows high tone. ↩︎

  12. Stevenson, Grammar of the Nyimang Language, p. 178, shows cleft constructions with a similar core+adjoined structure, wadang nɔ a nɛ [a meo tolun] “This is the man [I saw yesterday].” ↩︎

  13. Glossing abbreviations: 1, 2, 3 – 1st, 2nd, 3rd person; acc – accusative; decl – declarative; dir – directional; distr – distributive; du – dual; ev – event; fact – factative; gen – genitive; imp – imperative; loc – locative; med – mediopassive; medcaus – mediocausative; pass – passive; pct – punctual; pl – plural; prog – progressive; pst – past; ptcp – participle; sg – singular; th – theme; top – topic; tr. – transitive; ven – ventive; ver – veridical. ↩︎

  14. Stevenson, Grammar of the Nyimang Language, p. 176, claims that “GAI gives the idea of completion, going on till an act is finished,” although all his examples involve a plural subject “they.” His claim suggests that this quantifier may have a collective function, over all participants and/or over all the stages in the completion of the event. It can nevertheless appear in the same clause as distributive marking -ɪ́d̪, as in an example shown in Norton, “Number in Ama Verbs,” p. 83, wùd̪ēŋ bā dɔ̄rɛ̄ŋ t̪ɛ̀l-ɪ́d̪-ɛ̄ ɡàɪ̀ “the child saw each of the children [until she had seen them all].” ↩︎

  15. Stevenson, “A Survey of the Phonetics and Grammatical Structure of the Nuba Mountain Languages,” 41: p. 177. ↩︎

  16. Welmers, African Language Structures, pp. 346, 348. ↩︎

  17. Compare Mufwene, “Stativity and the Progressive,” where it is argued that progressive is a stativizing category in a number of European and Bantu languages, although progressive verb forms typically have a more transient interpretation, and lexical statives a more permanent interpretation. ↩︎

  18. Stirtz, A Grammar of Gaahmg, p. 40. ↩︎

  19. Rilly, Le méroïtique et sa famille linguistique, annex. ↩︎

  20. Stevenson, Rottland & Jakobi, “The Verb in Nyimang and Dinik,” p. 16. By convention, t is dental and mid tone is left unmarked in their data. Pertinent to the present alternation, I question the phonemic status of the w in t/kw alternations before rounded vowels. ↩︎

  21. Greenberg, The Languages of Africa, pp. 115, 132; Bryan, “The T/K Languages”; Gilley, “Katcha Noun Morphology,” §2.5, §3, §4. ↩︎

  22. Rilly, Le méroïtique et sa famille linguistique, p. 299. ↩︎

  23. Frajzyngier, “The Plural in Chadic”; Wolff, “Patterns in Chadic (and Afroasiatic?) Verb Base Formations.” ↩︎

  24. Newman, “Pluractional Verbs” notes a separate affinity between pluractional and habitual aspect found in Niger-Congo and Chadic languages. Smits, A Grammar of Lumun, §13, identifies habitual pluractionals in a Niger-Congo language of the Nuba Mountains. ↩︎

  25. Hyman & Udoh, “Progressive Formation in Leggbo.” ↩︎

  26. Laca, “Progressives, Pluractionals and the Domains of Aspect.” ↩︎

  27. See, however, §4.2 below which purports to recover the missing extension. ↩︎

  28. Rilly, Le méroïtique et sa famille linguistique, p. 478. ↩︎

  29. Ibid.; Old Nubian also attests the lateral in a hapax form κⲁⲗ-. ↩︎

  30. Everett, “Cultural Constraints on Grammar and Cognition in Pirahã.” ↩︎

  31. Nevins, Pesetsky & Rodrigues, “Pirahã Exceptionality”; Everett, “Pirahã Culture and Grammar.” ↩︎

  32. Stevenson, Grammar of the Nyimang Language, §XI; Stevenson, “A Survey of the Phonetics and Grammatical Structure of the Nuba Mountain Languages,” 41: pp. 171–183; Stevenson, Rottland & Jakobi, “The Verb in Nyimang and Dinik”; Norton, “Number in Ama Verbs”; Norton, “The Ama Dual Suffix”; Norton, “Classifying the Non-Eastern-Sudanic Nuba Mountain Languages.” ↩︎

  33. The Tama plural copula àɡ is likewise listed with low tone in Rilly, Le méroïtique et sa famille linguistique, p. 451. ↩︎

  34. Newman, “Pluractional Verbs.” ↩︎

  35. Norton, “Number in Ama Verbs,” pp. 77, 83. ↩︎

  36. I say the distributive is “largely” unaffected by the shift from pluractional to progressive because a dental plosive appears to have been co-opted in the progressive ventive suffix, as in dɪ̀ɟ-ɪ́-n-ɪ̄ɡ/dɪ̀ɟ-ɪ́d̪-ɛ̄n-ɛ̀ɡ (throw-ven-du-dir) “threw to”/“is throwing to,” as the dental plosive is the only difference from the factative ventive suffix -ɪ́. ↩︎

  37. Norton, “Number in Ama Verbs,” p. 81. ↩︎

  38. de Voogt, “Dual Marking and Kinship Terms in Afitti,” p. 903, which also shows a similar plural object suffix -to. ↩︎

  39. Stevenson, “A Survey of the Phonetics and Grammatical Structure of the Nuba Mountain Languages,” 41: p. 187, where ɨ is used in the same way as contemporary ɪ. Tone was not recorded. ↩︎

  40. Werner, Tìdn-áal, p. 52. ↩︎

  41. Abdel-Hafiz, A Reference Grammar of Kunuz Nubian, p. 117. Tone was not recorded. ↩︎

  42. Corbett, Number, p. 116. ↩︎

  43. Jakobi, this issue. ↩︎

  44. Corbett, Number, pp. 115–116. ↩︎

  45. Norton, “Number in Ama Verbs,” pp. 78, 79, 91. ↩︎

  46. de Voogt, “Dual Marking and Kinship Terms in Afitti,” p. 903. ↩︎

  47. I am grateful to Alex de Voogt for sharing this data in personal communication from his field research on Afitti. ↩︎

  48. Stevenson, “A Survey of the Phonetics and Grammatical Structure of the Nuba Mountain Languages,” 41: p. ↩︎

  49. Abdel-Hafiz, A Reference Grammar of Kunuz Nubian, p. 118. ↩︎

  50. Jakobi, this issue. Jakobi points out that the other very similar suffix -íd in Midob cannot be reconstructed to proto-Nubian from just one Nubian language, so it appears to be an innovation, and her observation of its similarity to the Ama suffix clearly suggests borrowing into Midob from Ama’s ancestor or another related language. Hence, the reconstructable pluractional *-[i]ɟ is more viable as the historic cognate of the Ama suffix. ↩︎

  51. Rilly, Le méroïtique et sa famille linguistique, pp. 303–304. ↩︎

  52. Jakobi, this issue. ↩︎

  53. Comfort, “Verbal Number in the Uncu Language.” ↩︎

  54. Norton, “Number in Ama Verbs,” p. 40. ↩︎

  55. Norton, “Number in Ama Verbs,” §3. ↩︎

  56. Stevenson, Rottland & Jakobi, “The Verb in Nyimang and Dinik,” p. 28. ↩︎

  57. Norton, “The Ama Dual Suffix,” p. 121. ↩︎

  58. Ibid., p. 120. ↩︎

  59. Norton, “Number in Ama Verbs,” pp. 84, 87. ↩︎

  60. Dahl, The Growth and Maintenance of Linguistic Complexity; Trudgill, Sociolinguistic Typology. ↩︎

  61. Maturity could also describe further properties of Ama verbs whose description is in preparation by the author, including further instances of allomorphy, fusion, polyfunctionality, and several kinds of tonal morphology. ↩︎

  62. Norton, “Classifying the Non-Eastern-Sudanic Nuba Mountain Languages.” ↩︎

  63. Stevenson, “A Survey of the Phonetics and Grammatical Structure of the Nuba Mountain Languages,” 41: p. 175, notes the similarity of Ama’s nominal plural ŋi to a similar plural clitic ŋi [sic] in Heiban, which here might be interpreted as a pidginization effect in which the universally well-motivated category of nominal plurality was renewed in Nyima during inter-group communication after the earlier loss of number affixes. However, Stevenson is unusually in error in this passage, as the Heiban form is actually -ŋa, as he himself documented (ibid., p. 28). Subsequent lowering to a in Heiban cannot be ruled out (he notes Heiban’s relative Talodi has ɛ here), but it is also quite possible that ŋi was sourced internally, as the high front vowel is also the common element in the plural pronouns (ə̀ŋí/ɲí/ə̀ní 1pl/2pl/3pl). ↩︎

  64. Rottland & Jakobi, “Loan Word Evidence from the Nuba Mountains.” ↩︎

  65. Stevenson, Rottland & Jakobi, “The Verb in Nyimang and Dinik,” pp. 34–38. ↩︎

  66. Stevenson, “A Survey of the Phonetics and Grammatical Structure of the Nuba Mountain Languages,” 41: p. 177. ↩︎

  67. Jakobi, this issue. ↩︎

\ No newline at end of file diff --git a/public/article/rilly/index.html b/public/article/rilly/index.html new file mode 100644 index 0000000..1c8ecb5 --- /dev/null +++ b/public/article/rilly/index.html @@ -0,0 +1,5 @@ +Personal Markers in Meroitic - Dotawo Journal

Personal Markers in Meroitic

\ No newline at end of file diff --git a/public/article/russell/index.html b/public/article/russell/index.html deleted file mode 100644 index d0c64fb..0000000 --- a/public/article/russell/index.html +++ /dev/null @@ -1,7 +0,0 @@ -Ama Verbs in Comparative Perspective - Dotawo Journal

Ama Verbs in Comparative Perspective

Abstract

Ama verbs are comparable with Nubian and other related languages in their clause-final syntax, CVC root shape, and some affixes. However, there is also considerable innovation in adjoined relative clauses, a shift from number to aspect marking traced by T/K morphology, and other changes in the order and meaning of affixes. These developments show a unique trend of concretization of core clause constituents, and internal growth in the complexity of verbs in isolation from other languages. On the other hand, Ama’s stable distributive pluractional represents a wider Eastern Sudanic category. The late loss of pronominal subject marking supports a hypothesis that the Ama language was used for inter-group communication with Kordofan Nubians.

\ No newline at end of file diff --git a/public/article/starostin/index.html b/public/article/starostin/index.html new file mode 100644 index 0000000..4b01938 --- /dev/null +++ b/public/article/starostin/index.html @@ -0,0 +1,44 @@ +Restoring “Nile-Nubian”: How To Balance Lexicostatistics and Etymology in Historical Research on Nubian Languages - Dotawo Journal

Restoring “Nile-Nubian”: How To Balance Lexicostatistics and Etymology in Historical Research on Nubian Languages


Introduction

Although there has never been any serious disagreement on which languages constitute the Nubian family, its internal classification has been continuously refined and revised, due to such factors as the overall complexity of the processes of linguistic divergence and convergence in the “Sudanic” area of Africa; constant influx of new data that forces scholars to reevaluate former assumptions; and lack of scholarly agreement on what types of data provide the best arguments for language classification.

Traditionally, four main units have been recognized within Nubian1:

This is, for instance, the default classification model adopted in Joseph Greenbergʼs general classification of the languages of Africa,2 and for a long time it was accepted in almost every piece of research on the history of Nubian languages.

More recently, however, an important and challenging hypothesis on a re-classification of Nubian has been advanced by Marianne Bechhaus-Gerst.3 Having conducted a detailed lexicostatistical study of a representative batch of Nubian lects, she made the important observation that, while the percentage of common matches between the two main components of Nile-Nubian is indeed very high (70%), Kenuzi-Dongolawi consistently shows a much higher percentage in common with the other three branches of Nubian than Nobiin does (table 1).

        Meidob   Birgid   Kadaru   Debri   Dilling   K/D
K/D     54%      48%      58%      57%     58%
Nobiin  40%      37%      43%      41%     43%       70%

Table 1. Part of the lexicostatistical matrix for Nubian4

In Bechhaus-Gerstʼs view, such a discrepancy could only be interpreted as evidence of Kenuzi-Dongolawi and Nobiin not sharing an intermediate common “Nile-Nubian” ancestor (if they did share one, its modern descendants should be expected to have more or less the same percentages of matches with the other Nubian subgroups). Instead, she proposed independent lines of development for the two dialect clusters, positioning Nobiin as not just a separate branch of Nubian, but actually the earliest segregating branch of Nubian. Consequently, in her standard historical scenario described at length in two monographs, there was not one, but two separate migrations into the Nile Valley from the original Nubian homeland (somewhere in South Kordofan/Darfur) — one approximately around 1,500 BCE (the ancestors of modern Nobiin-speaking people), and one around the beginning of the Common Era (speakers of Kenuzi-Dongolawi). As for the multiple exclusive similarities between Nobiin and Kenuzi-Dongolawi, these were explained away as results of “intensive language contact.”5 The lexicostatistical evidence was further supported by the analysis of certain phonetic and grammatical peculiarities of Nobiin that separate it from Kenuzi-Dongolawi; however, as of today it is the lexical specificity of Nobiin that remains at the core of the argument.

Bechhaus-Gerstʼs classificatory model, with its important implications not only for the history of Nubian peoples, but also for the theoretical and methodological development of historical and areal linguistics in general, remains somewhat controversial. While it has been embraced in the recent editions of such influential online language catalogs as Ethnologue and Glottolog and is often quoted as an important example of convergent linguistic processes in Africa,6 specialists in the field often remain undecided,7 and it is concluded in the most recent handbook on African linguistics that “the internal classification of Nubian remains unclear.”8 One of the most vocal opponents of the new model is Claude Rilly, whose research on the reconstruction of Proto-Nubian (in conjunction with his work on the historical relations and genetic affiliation of Meroitic) and investigation into Bechhaus-Gerstʼs evidence has led him to an even stronger endorsement of the Nile-Nubian hypothesis than ever before.9

While in theory there is nothing impossible about the historical scenario suggested by Bechhaus-Gerst, in practice it seems rather far-fetched that language A, rather distantly related to language B, could undergo convergent development over an approximately 1,000-year period (from the supposed migration of Kenuzi–Dongolawi into the Nile Valley up to the attestation of the first texts in Old Nubian, which already share most of the important features of modern Nobiin) so serious that language A can easily be misclassified, even by specialists, as belonging to the same group as language B. At the very least, it would seem to make perfect sense, before adopting this scenario wholeheartedly, to look for alternate solutions that might yield a more satisfactory explanation of the odd deviations found in the data.

Let us look again more closely (table 2) at the lexicostatistical evidence, reducing it, for the sake of simple clarity, to percentages of matches observed in a “triangle” consisting of Kenuzi–Dongolawi, Nobiin, and one other Nubian language that is universally recognized as belonging to a very distinct and specific subbranch of the family — Midob. Comparative data are given from the older study by Bechhaus-Gerst and my own more recent examination of the basic lexicon evidence.10

        Nobiin   Midob
K/D     70%      54%
Nobiin           40%

Table 2a. Lexicostatistical relations between Nile-Nubian and Midob (Bechhaus-Gerst)11

        Nobiin   Midob
K/D     66%      57%
Nobiin           51%

Table 2b. Lexicostatistical relations between Nile-Nubian and Midob (Starostin)12

The significant differences in figures between the two lexicostatistical calculations are explained by a number of factors (slightly divergent Swadesh-type lists; different etymologizations of several items on the list; exclusion of transparent recent loans from Arabic in Starostinʼs model). Nevertheless, the obvious problem does not go away in the second model: Midob clearly shares a significantly larger number of cognates with K/D than with Nobiin — a fact that directly contradicts the K/D–Nobiin proximity on the Nubian phylogenetic tree.13 The situation remains the same if we substitute Midob with any other non-Nile-Nubian language, such as Birgid or any of the multiple Hill Nubian idioms.

The important thing is that there are actually two possible reasons for this discrepancy in the lexicostatistical matrix. One, endorsed by Bechhaus-Gerst, is that the K/D–Nobiin number is incorrectly increased by the addition of a large number of items that have not been inherited from a common ancestor, but actually borrowed from Nobiin into K/D. An alternate scenario, however, is that the active recipient was Nobiin, except that the donor was not K/D — rather, a certain percentage of Nobiin basic lexicon could have been borrowed from a third, possibly unidentified source, over a relatively short period of time, which resulted in lowering the percentage of Nobiin matches with all other Nubian languages.

Thus, for instance, if we assume (or, better still, somehow manage to prove) that Nobiin borrowed 6% of the Swadesh wordlist (i.e., 6 words on the 100-item list) from this third source, exclusion of these words from lexicostatistical calculation would generally normalize the matrix, increasing the overall percentage for the K/D–Nobiin and Nobiin–Midob pairs, but not for the K/D–Midob pair.
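The renormalization described above can be made concrete with a small computation. This is a hypothetical sketch: the match counts are read off table 2b above as matches out of a 100-item list, and the figure of 6 substrate borrowings is assumed purely for illustration.

```python
# Hypothetical illustration of renormalizing a lexicostatistical matrix
# after excluding suspected substrate borrowings from the Nobiin list.
matches = {
    ("K/D", "Nobiin"): 66,   # table 2b percentages read as matches / 100 items
    ("Nobiin", "Midob"): 51,
    ("K/D", "Midob"): 57,
}
SUBSTRATE_ITEMS = 6  # assumed number of Nobiin list items from a third source

def normalized(pair, n_matches, total=100):
    # Suspected borrowings are excluded only from comparisons involving
    # Nobiin, shrinking the effective list for those pairs.
    if "Nobiin" in pair:
        total -= SUBSTRATE_ITEMS
    return round(100 * n_matches / total, 1)

for pair, m in matches.items():
    print(pair, normalized(pair, m))
# Both Nobiin pairs rise (66/94 ≈ 70.2%, 51/94 ≈ 54.3%),
# while K/D–Midob stays at 57%, as the text predicts.
```

The point of the sketch is only that excluding a modest number of substrate items raises the Nobiin percentages uniformly, which is exactly the asymmetry the matrix shows.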

The tricky part in investigating this situation is determining the status of those Nobiin words on the Swadesh list that it does not share with K/D. If the phylogenetic structure of the entire Nubian group is such that Nobiin represents the very first branch to split off from the main body of the tree, as in Bechhaus-Gerstʼs model (fig. 1), then we would expect a certain portion of the Swadesh wordlist in Nobiin to be represented by the following two groups of words:


Fig. 1. The revised classification of Nubian according to Bechhaus-Gerst

Indeed, we have a large share of Nobiin basic words that set it apart from every other Nubian language (see the more than 30 items in section III of the list below), but how can we distinguish retentions from innovations? If the word in question has no etymological cognates in any other Nubian language, then in most cases such a distinction is impossible.14 However, if the retention or innovation in question was not accompanied by the total elimination of the root morpheme, but rather involved a semantic shift, then investigating the situation from an etymological point of view may shed some significant light on the matter. In general, the more lexicostatistical discrepancies we find between Nobiin and the rest of Nubian where the Nobiin item has a Common Nubian etymology, the better the case for the “early separation of Nobiin” hypothesis; the more “strange” words we find in Nobiin whose etymological parallels in the other Nubian languages are highly questionable or non-existent, the stronger the case for the “pre-Nobiin substrate” hypothesis.

In order to resolve this issue, below I offer a concise and slightly condensed etymological analysis of the entire 100-item Swadesh wordlist for modern Nobiin.15 The lexical items are classified into three groups:

100-Item Swadesh List for Nubian: The Data

I. Nobiin/Kenuzi-Dongolawi Isoglosses

I.1. General Nubian Isoglosses

I.2. Exclusive Nile-Nubian Isoglosses

II. Nobiin / Non-K/D Isoglosses

II.1. Potential K/D innovations

II.2. Potential Synonymy in the Protolanguage

III. Nobiin-exclusive Items

III.1. Nobiin-exclusive Items with a Nubian Etymology

III.2. Nobiin-exclusive Items without a Nubian Etymology

III.3. Nobiin-exclusive Recent Borrowings

Analysis of the Data

Based on the presented data and the etymological discussion accompanying (or not accompanying) individual pieces of it, the following observations can be made:

  1. Altogether, section III.2 contains 20 items that are not only lexicostatistically unique to Nobiin, but also do not appear to have any etymological cognates whatsoever in any other Nubian language. This observation is certainly not conclusive, since some parallels may have been missed in the analysis of existing dictionaries and wordlists, and more extensive lexicographical research on such languages as Midob or Hill Nubian may yet turn up additional parallels. At present, however, it is an objective fact that the percentage of such words in the Nobiin basic lexicon significantly exceeds the corresponding percentages for any other Nubian language (even Midob, which, according to general consensus, is one of the most highly divergent branches of Nubian). Most of these words are attested already in ON, which is hardly surprising, since the majority of recent borrowings into Nobiin have been from Arabic and are quite transparent as to their origin (see section III.3).
  2. Analysis of section III.1 shows that in the majority of cases where the solitary lexicostatistical item in Nobiin does have a Common Nubian etymology, semantic comparison speaks strongly in favor of innovation, i.e., semantic shift in Nobiin: “blood” ← “fat,” “hear” ← “ear,” “meat” ← “inside,” “say” ← “tell,” “swim” ← “be on the surface,” “tree” ← “jujube”; a few of these cases may be debatable, but the overall tendency is clear. This observation in itself does not contradict the possibility of early separation of Nobiin, but the near-total lack of words that could be identified as reflexes of Proto-Nubian Swadesh equivalents of the respective meanings in this particular group clearly speaks against this historical scenario.
  3. It is worth mentioning that the number of isoglosses that Nobiin shares with other branches of Nubian to the exclusion of K/D (section II.1) is extremely small, especially when compared to the number of exclusive Nile-Nubian isoglosses between Nobiin and K/D. However, this observation neither contradicts nor supports the early separation hypothesis (since we are not assuming that Nobiin should be grouped together with B, M, or Hill Nubian).

Conclusions

Based on this brief analysis, I suggest that rejecting the Nile-Nubian hypothesis in favor of the alternative historical scenario proposed by Bechhaus-Gerst is not advisable, since it runs into no fewer than two independent historical anomalies:

  1. assumption of a huge number of basic lexical borrowings from Kenuzi–Dongolawi into Nobiin (even including such elements as demonstrative and interrogative pronouns, typically resistant to borrowing);
  2. assumption of total loss of numerous Proto-Nubian basic lexical roots in all branches of Nubian except for Nobiin (19–21 possible items in section III.2). Such conservatism would be highly suspicious; it is also directly contradicted by a few examples such as “water” (q.v.) which clearly indicate that Nobiin is innovative rather than conservative.

By contrast, the scenario that retains Nobiin within Nile-Nubian, but postulates the existence of a “pre-Nobiin” substrate or adstrate only assumes one historical oddity, similar to (1) above — the (presumably rapid) replacement of a large chunk of the Nobiin basic lexicon by words borrowed from an unknown substrate. However, it must be noted that the majority of words in section III.2 are nouns, rather than verbs or pronouns, and this makes the idea of massive borrowing more plausible than in the case of presumed borrowings from K/D into Nobiin.31

This conclusion is in complete agreement with the tentative identification of a “pre-Nile-Nubian substrate” in Nobiin by Claude Rilly,32 who, based on a general distributional analysis of Nubian lexicon, claims to identify no fewer than 51 Nobiin lexical items derived from that substrate, most of them belonging to the sphere of basic lexicon. It remains to be ascertained if all of Rillyʼs 51 items are truly unique in Nobiin (as I have already mentioned above, some of these Nobiin isolates might eventually turn out to be retentions from Proto-Nubian if future data on Hill Nubian and Midob happens to contain etymological parallels), but the fact that Rilly and the author of this paper arrived at the same conclusion independently of each other by means of somewhat different methods looks reassuring.

If the Nile-Nubian branch is to be reinstated, and the specific features of Nobiin are to be explained by the influence of a substrate that did not affect its closest relative (K/D), this leaves us with two issues to be resolved — (a) chronology (and geography) of linguistic events, and (b) the genetic affiliation of the “pre-Nile-Nubian substrate” in question.

The aspect of chronology has previously been discussed in glottochronological terms.33 In both of these sources, the application of the glottochronological method as introduced by Morris Swadesh and later recalibrated by Sergei Starostin made it possible to generate the following classification and datings (fig. 2):


Fig. 2. Phylogenetic tree for the Nubian languages with glottochronological datings (generated by the StarlingNJ method)34

If we take the glottochronological figures at face value, they imply the original separation of Proto-Nile-Nubian around three to three and a half thousand years ago, and then a further split between the ancestors of modern Nobiin and K/D around two to two and a half thousand years ago. Interestingly enough, these events are chronologically correlatable with the two main events in the history of Nile-Nubian languages according to Bechhaus-Gerst, but not quite in the way that she envisions it: her “early separation of Nobiin” becomes the early separation of Nobiin and K/D, and her “later separation of K/D” becomes “final split between Nobiin and K/D.” The interaction between Nobiin and the mysterious “pre-Nile-Nubian substrate” must have therefore taken place some time in the 1st millennium CE (after the split with K/D but prior to the appearance of the first written texts in Old Nubian). Nevertheless, at this point I would like to refrain from making any definitive conclusions on probable dates and migration routes, given the possibility of alternate glottochronological models.
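For orientation, the classic Swadesh formula behind such datings can be sketched as follows. Note that this is only the original constant-rate model; the datings in fig. 2 come from Starostinʼs recalibration, which modifies the decay assumptions and yields systematically older dates, so the number below is not expected to match the figure.

```python
import math

# Classic Swadesh glottochronology: t = ln(c) / (2 * ln(r)), in millennia,
# where c is the fraction of shared cognates between two languages and
# r is the assumed per-millennium retention rate (0.86 for a 100-item list).
def swadesh_separation_time(shared_fraction, retention_rate=0.86):
    return math.log(shared_fraction) / (2 * math.log(retention_rate))

# With 66% shared cognates (the K/D-Nobiin figure in table 2b), the
# uncalibrated classic formula gives roughly 1.4 millennia of separation.
print(round(swadesh_separation_time(0.66), 2))  # → 1.38
```

The key design point is that lower shared percentages translate into older split dates, which is why a substrate-inflated deficit in Nobiinʼs matches would artificially push its separation date back.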

The other issue — linguistic identification of the “pre-Nile-Nubian substrate” — is even more interesting, since its importance goes far beyond Nubian history, and its successful resolution may have direct implications for the reconstruction of the linguistic history of Africa in general. Unfortunately, at this moment one can only speculate about what that substrate might have been, or even about whether it is reasonable to speak about a single substrate or a variety of idioms that may have influenced the early independent development of Nobiin.

Thus, Rilly, having analyzed lexical (sound + meaning) similarities between his 51 “pre-Nile-Nubian substrate” elements and other languages spoken in the region today or in antiquity, reached the conclusion that the substrate in question may have contained two layers: one related to ancient Meroitic, and another coming from the same Northern branch of Eastern Sudanic to which Nubian itself is claimed to belong.35 An interesting example of the former would be the resemblance between ON mašal “sun” and Meroitic ms “sun, sun god,” while the latter may be illustrated by Nobiin šìgír-tí “hair” = Tama sìgít id. However, few of Rillyʼs other parallels are equally convincing: most are characterized by significant phonetic (e.g., Nobiin súː vs. Nara sàː “milk”) or semantic (e.g., Nobiin nóːg “house” vs. Nara lòg “earth”) discrepancies, hardly what one would expect from contact relations that took place no earlier than two thousand years ago. Subsequent research has not managed to alleviate that problem: cf., e.g., the attempt to derive Nobiin nùlù “white” from Proto-Northeast Sudanic *ŋesil “tooth,”36 unconvincing due to multiple simultaneous phonetic and semantic issues.

In Jazyki Afriki, an alternative hypothesis was put forward, expanding upon an earlier observation by Robin Thelwall,37 who, while conducting his own lexicostatistical comparison of Nubian with other potential branches of East Sudanic, had first noticed specific correlations between Nobiin and Dinka (West Nilotic). Going through the Nobiin data in +www⁄section III.2 +yields several phonetically and semantically close matches with West Nilotic, such as:

Additionally, Nobiin múg “dog” is similar to East Nilotic *-ŋɔk-38 and Kalenjin *ŋoːk,39 assuming the possibility of assimilation (*ŋ- → m- before a following labial vowel in Nobiin). These parallels, although still sparse, constitute by far the largest single group of matches between the “pre-Nile-Nubian substrate” and a single linguistic family (Nilotic), making this a promising line of future research, although they neither conclusively prove the Nilotic nature of the substrate nor eliminate the possibility of several substrate layers with different affiliations.

In any case, the main point of this paper is not so much to shed light on the origin of substrate elements in Nobiin as to show that pure lexicostatistics, when applied to complex cases of language relationship, may reveal anomalies that can only be resolved through careful etymological analysis of the accumulated evidence. It is entirely possible that advanced character-based phylogenetic methods will offer additional insight into this problem, but ultimately it comes down to a manual search for cognates, without losing sight of the statistical grounding of the conclusions.
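The lexicostatistical anomaly-hunting described above ultimately rests on comparing pairwise cognate percentages across a fixed meaning list. A minimal sketch of the computation, with entirely hypothetical cognate-class assignments standing in for real wordlist data:

```python
from itertools import combinations

# Toy Swadesh-style wordlists: each meaning maps to a cognate-class ID,
# so two languages count as cognate for a meaning iff the IDs match.
# All assignments below are hypothetical, for illustration only.
wordlists = {
    "Nobiin": {"eye": 1, "water": 1, "dog": 2, "house": 2, "sun": 3},
    "K/D":    {"eye": 1, "water": 1, "dog": 1, "house": 2, "sun": 1},
    "Midob":  {"eye": 1, "water": 2, "dog": 1, "house": 1, "sun": 1},
}

def cognate_share(a, b):
    """Fraction of shared meanings for which two lists have cognate entries."""
    common = set(a) & set(b)
    return sum(a[m] == b[m] for m in common) / len(common)

# An "anomaly" in the sense discussed here is a pair whose percentage
# deviates sharply from what its assumed subgrouping predicts.
for (n1, l1), (n2, l2) in combinations(wordlists.items(), 2):
    print(f"{n1} ~ {n2}: {cognate_share(l1, l2):.0%}")
```

The etymological analysis then decides, for each anomalous pair, whether the deviation reflects genuine shared retentions, borrowings, or substrate influence.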

In this particular case, I believe that the evidence speaks strongly in favor of reinstating the Nile-Nubian clade comprising both Nobiin and Kenuzi-Dongolawi, although it must be kept in mind that a common linguistic ancestor and a common ethnic ancestor are not necessarily the same thing (the linguistic conclusion does not exclude the possibility that the early speakers of Kenuzi-Dongolawi shifted to Proto-Nile-Nubian from some other language, not necessarily Nubian in origin itself).


  1. Bechhaus-Gerst, “Nile-Nubian Reconsidered,” p. 85. ↩︎

  2. Greenberg, The Languages of Africa, p. 84. ↩︎

  3. Bechhaus-Gerst, “Nile-Nubian Reconsidered”; Bechhaus-Gerst, Sprachwandel durch Sprachkontakt am Beispiel des Nubischen im Niltal; Bechhaus-Gerst, The (Hi)story of Nobiin. ↩︎

  4. Bechhaus-Gerst, Sprachwandel durch Sprachkontakt am Beispiel des Nubischen im Niltal, p. 88. ↩︎

  5. Bechhaus-Gerst, The (Hi)story of Nobiin, p. 22. ↩︎

  6. E.g., Heine & Kuteva, “Convergence and Divergence in the Development of African Languages.” ↩︎

  7. E.g., Jakobi, “The Loss of Syllable-final Proto-Nubian Consonants.” ↩︎

  8. Güldemann, “Historical Linguistics and Genealogical Language Classification in Africa,” p. 283. ↩︎

  9. Rilly, Le méroïtique et sa famille linguistique, pp. 211–288; Rilly, “Language and Ethnicity in Ancient Sudan,” pp. 1180–1183. We will return to Rillyʼs arguments in the final section of this paper. ↩︎

  10. Starostin, Jazyki Afriki, pp. 24–95. ↩︎

  11. Bechhaus-Gerst, “Nile-Nubian Reconsidered.” ↩︎

  12. Starostin, Jazyki Afriki. ↩︎

  13. In this article, the following language abbreviations are used: +B — Birgid; +D — Dongolawi; +Dl — Dilling; +K — Kenuzi; +K/D — Kenuzi-Dongolawi; +M — Midob; +N — Nobiin; +ON — Old Nubian; +PN — Proto-Nubian. ↩︎

  14. One possible argument in this case would be to rely on data from external comparison. Thus, if we agree that Nubian belongs to the Northern branch of the Eastern Sudanic family, with the Nara language and the Taman group as its closest relatives (Rilly, Le méroïtique et sa famille linguistique; Starostin, Jazyki Afriki), then, in those cases where Nobiin data is opposed to the data of all other Nubian languages, it is the word that finds better etymological parallels in Nara and Tama that should be logically regarded as the Proto-Nubian equivalent. However, in order to avoid circularity or the additional problems that one runs into while investigating chronologically distant language relationship, I intentionally restrict the subject matter of this paper to internal Nubian data only. ↩︎

  15. Considerations of volume, unfortunately, do not allow me to go into sufficient detail on many of the more complicated cases. A subset of 50 words, representing the most stable (on average) Swadesh items, has been analyzed in detail and published (in Russian) in Starostin, Jazyki Afriki, pp. 224–95. A complete 100-item wordlist reconstructed for Proto-Nubian, with detailed notes on phonetics, semantics, and distribution, is scheduled to be added to the already available annotated 100-item wordlists for ten Nubian languages, published as part of +www⁄The Global Lexicostatistical Database +. ↩︎

  16. Note on the data sources: for reasons of volume, I do not include all available data in the etymologies. Nobiin (N) forms are quoted based on Werner’s Grammatik des Nobiin; if the word is missing from Wernerʼs relatively short glossary, additional forms may be drawn upon from either older sources, such as Lepsius’s Nubische Grammatik, or newer ones, e.g., Khalil’s Wörterbuch der nubischen Sprache (unfortunately, Khalilʼs dictionary is unusable as a lexicostatistical source due to its unwarranted omission of Arabic borrowings and conflation of various early sources). The ancient forms of Old Nubian (ON) are taken from Gerald Browneʼs Old Nubian Dictionary.

    Data on the other languages are taken from the most comprehensive published dictionaries, vocabularies, and/or wordlists and are quoted as follows: Kenuzi (K) — Hofmann, Nubisches Wörterverzeichnis; Dongolawi (D) — Armbruster, Dongolese Nubian; Midob (M) — Werner, Tìdn-áal; Birgid (B) — Thelwall, “A Birgid Vocabulary List”; Dilling (Dl) — Kauczor, Die Bergnubische Sprache. Hill Nubian data other than Dilling are used sparingly, only when it is necessary to specify the distribution of a given item; occasional forms from such languages as Kadaru, Debri, Karko, and Wali are quoted from wordlists published in Thelwall, “Lexicostatistical Relations between Nubian, Daju and Dinka” and Krell, Rapid Appraisal Sociolinguistic Survey among Ama, Karko, and Wali Language Groups.

    Proto-Nubian forms are largely based on the system of correspondences that was originally laid out in Marianne Bechhaus-Gerstʼs reconstruction of Proto-Nubian phonology in “Sprachliche und historische Rekonstruktionen im Bereich des Nubischen unter besonderer Berücksichtigung des Nilnubischen,” but with a number of emendations introduced in Starostin, Jazyki Afriki. Since this study is more concerned with issues of cognate distribution than those of phonological reconstruction and phonetic interpretation, I will refrain from reproducing full tables of phonetic correspondences, but brief notes on peculiarities of reflexes of certain PN phonemes in certain Nubian languages will be given for those cases where etymological cognacy is not obvious or is disputable from the standard viewpoint of the neogrammarian paradigm. ↩︎

  17. Bechhaus-Gerst, “Nile-Nubian Reconsidered,” p. 94 lists this as one of two examples illustrating the alleged archaicity of Old Nubian and Nobiin in retaining original PN *g-, together with ON gouwi “shield.” However, in both of these cases K/D also show k- (cf. K/D karu “shield”), which goes against regular correspondences for PN *g- (which should yield K/D g-, see “red”), meaning that it is Nobiin and not the other languages that actually have an innovation here. ↩︎

  18. Reconstruction somewhat uncertain, but initial *ŋ- is fairly clearly indicated by the correspondences; see detailed discussion in Starostin, Jazyki Afriki, pp. 56–57. ↩︎

  19. Bechhaus-Gerst, “Nile-Nubian Reconsidered,” p. 93 counts this as an additional piece of evidence for the early separation of N, but since this is an innovation rather than an archaism, there are no grounds to assert that the innovation did not take place recently (i.e., after the separation of N from K/D). ↩︎

  20. Hofmann, Material für eine Meroitische Grammatik, p. 86. ↩︎

  21. See the detailed discussion on this phonetically unusual root in Starostin, Jazyki Afriki, p. 80. ↩︎

  22. Bell, “Documentary Evidence on the Haraza Nubian Language,” p. 10. ↩︎

  23. Khalil, Wörterbuch der nubischen Sprache, p. 124. ↩︎

  24. In Starostin, Jazyki Afriki, p. 92 I suggest that, since the regular reflex of PN *n- in Hill Nubian is d-, both Nile-Nubian *min and all the na(i)-like forms may go back to a unique PN stem *nwV-; if so, the word should be moved to +www⁄section I.1 +, but in any case this is still a common Nile-Nubian isogloss. ↩︎

  25. Werner, Grammatik des Nobiin, p. 357. ↩︎

  26. The meanings “sand; dust” are also indicated as primary for Nobiin iskid ~ iskit in Khalil, Wörterbuch der nubischen Sprache, p. 48. ↩︎

  27. Krell, Rapid Appraisal Sociolinguistic Survey among Ama, Karko, and Wali Language Groups, p. 40. ↩︎

  28. As per Bechhaus-Gerst, “Nile-Nubian Reconsidered,” p. 93. ↩︎

  29. Lepsius, Nubische Grammatik, p. 274. ↩︎

  30. Where *-n is a productive plural marker, cf. Bechhaus-Gerst, “Sprachliche und historische Rekonstruktionen im Bereich des Nubischen unter besonderer Berücksichtigung des Nilnubischen,” p. 109. ↩︎

  31. For a good typological analogy from a relatively nearby region, cf. the contact situation between Northern Songhay languages and Berber languages as described, e.g., in Souag, Grammatical Contact in the Sahara. ↩︎

  32. Rilly, Le méroïtique et sa famille linguistique, pp. 285–289. ↩︎

  33. Starostin, Jazyki Afriki, pp. 34–36; Vasilyev & Starostin, “Leksikostatisticheskaja klassifikatsija nubijskikh jazykov.” ↩︎

  34. For a detailed description of the StarlingNJ distance-based method of phylogenetic classification and linguistic dating, see Kassian, “Towards a Formal Genealogical Classification of the Lezgian Languages (North Caucasus).” ↩︎

  35. Rilly, Le méroïtique et sa famille linguistique, p. 285. ↩︎

  36. Rilly, “Language and Ethnicity in Ancient Sudan,” pp. 1181–1182. ↩︎

  37. Thelwall, “Lexicostatistical Relations between Nubian, Daju and Dinka,” pp. 273–274. ↩︎

  38. Vossen, The Eastern Nilotes, p. 354. ↩︎

  39. Rottland, Die Südnilotischen Sprachen, p. 390. ↩︎

a/public/fonts/vollkorn-v12-latin-ext_latin-italic.woff2 b/public/fonts/vollkorn-v12-latin-ext_latin-italic.woff2 deleted file mode 100644 index 74a8889..0000000 Binary files a/public/fonts/vollkorn-v12-latin-ext_latin-italic.woff2 and /dev/null differ diff --git a/public/fonts/vollkorn-v12-latin-ext_latin-regular.eot b/public/fonts/vollkorn-v12-latin-ext_latin-regular.eot deleted file mode 100644 index 9185676..0000000 Binary files a/public/fonts/vollkorn-v12-latin-ext_latin-regular.eot and /dev/null differ diff --git a/public/fonts/vollkorn-v12-latin-ext_latin-regular.svg b/public/fonts/vollkorn-v12-latin-ext_latin-regular.svg deleted file mode 100644 index c411210..0000000 --- a/public/fonts/vollkorn-v12-latin-ext_latin-regular.svg +++ /dev/null @@ -1,516 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/public/fonts/vollkorn-v12-latin-ext_latin-regular.ttf b/public/fonts/vollkorn-v12-latin-ext_latin-regular.ttf deleted file mode 100644 index c30f35b..0000000 Binary files a/public/fonts/vollkorn-v12-latin-ext_latin-regular.ttf and /dev/null differ diff --git a/public/fonts/vollkorn-v12-latin-ext_latin-regular.woff b/public/fonts/vollkorn-v12-latin-ext_latin-regular.woff deleted file mode 100644 index aa06e7b..0000000 Binary files a/public/fonts/vollkorn-v12-latin-ext_latin-regular.woff and /dev/null differ diff --git a/public/fonts/vollkorn-v12-latin-ext_latin-regular.woff2 b/public/fonts/vollkorn-v12-latin-ext_latin-regular.woff2 deleted file mode 100644 index a7c150f..0000000 Binary files a/public/fonts/vollkorn-v12-latin-ext_latin-regular.woff2 and /dev/null differ diff --git 
a/public/images/UNS-logo.png b/public/images/UNS-logo.png new file mode 100644 index 0000000..ffb05b9 Binary files /dev/null and b/public/images/UNS-logo.png differ diff --git a/public/images/bechhaus.png b/public/images/bechhaus.png new file mode 100644 index 0000000..7d0d363 Binary files /dev/null and b/public/images/bechhaus.png differ diff --git a/public/images/bibliotheke.svg b/public/images/bibliotheke.svg deleted file mode 100644 index 526611e..0000000 --- a/public/images/bibliotheke.svg +++ /dev/null @@ -1,187 +0,0 @@ - - - - - - - - - - - image/svg+xml - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/public/images/classification.png b/public/images/classification.png new file mode 100644 index 0000000..8071cdc Binary files /dev/null and b/public/images/classification.png differ diff --git a/public/index.html b/public/index.html deleted file mode 100644 index 0e4755a..0000000 --- a/public/index.html +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/public/issue/dotawo7/index.html b/public/issue/dotawo7/index.html deleted file mode 100644 index 492ba2c..0000000 --- a/public/issue/dotawo7/index.html +++ /dev/null @@ -1,6 +0,0 @@ -Dotawo 7: Comparative Northern East Sudanic Linguistics - Dotawo Journal

Dotawo 7: Comparative Northern East Sudanic Linguistics

issue⁄Dotawo 7: Comparative Northern East Sudanic Linguistics
has articles⁄

The seventh issue of Dotawo is dedicated to Comparative Northern East Sudanic linguistics, offering new insights into the historical connections between the Nubian languages and other members of the NES family such as Nyimang, Tama, Nara, and Meroitic. A special focus is placed on comparative morphology.

\ No newline at end of file diff --git a/public/issue/index.html b/public/issue/index.html deleted file mode 100644 index edb6cec..0000000 --- a/public/issue/index.html +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/public/issue/the-issue/index.html b/public/issue/the-issue/index.html deleted file mode 100644 index 726c4a8..0000000 --- a/public/issue/the-issue/index.html +++ /dev/null @@ -1,4 +0,0 @@ -The Issue - Dotawo Journal

The Issue

issue⁄The Issue
has articles⁄

Some text.

\ No newline at end of file diff --git a/public/journal/index.html b/public/journal/index.html deleted file mode 100644 index c3bc798..0000000 --- a/public/journal/index.html +++ /dev/null @@ -1,5 +0,0 @@ -Dotawo Journal - Dotawo Journal

Dotawo Journal

journal⁄Dotawo Journal
has issues⁄

Nubian studies needs a platform in which the old meets the new, in which archaeological, historical, and philological research into Meroitic, Old Nubian, Coptic, Greek, and Arabic sources confronts current investigations in modern anthropology and ethnography, Nilo-Saharan linguistics, and critical and theoretical approaches present in postcolonial and African studies.

The journal Dotawo: A Journal of Nubian Studies brings these disparate fields together within the same fold, opening a cross-cultural and diachronic field where divergent approaches meet on common soil. Dotawo gives a common home to the past, present, and future of one of the richest areas of research in African studies. It offers a crossroads where papyrus can meet internet, scribes meet critical thinkers, and the promises of growing nations meet the accomplishments of old kingdoms.

\ No newline at end of file diff --git a/public/js/paged.polyfill.js b/public/js/paged.polyfill.js deleted file mode 100644 index 1b7048a..0000000 --- a/public/js/paged.polyfill.js +++ /dev/null @@ -1,30231 +0,0 @@ -(function (global, factory) { - typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() : - typeof define === 'function' && define.amd ? define(factory) : - (global = global || self, global.PagedPolyfill = factory()); -}(this, (function () { 'use strict'; - - function createCommonjsModule(fn, module) { - return module = { exports: {} }, fn(module, module.exports), module.exports; - } - - function getCjsExportFromNamespace (n) { - return n && n['default'] || n; - } - - var isImplemented = function () { - var assign = Object.assign, obj; - if (typeof assign !== "function") return false; - obj = { foo: "raz" }; - assign(obj, { bar: "dwa" }, { trzy: "trzy" }); - return (obj.foo + obj.bar + obj.trzy) === "razdwatrzy"; - }; - - var isImplemented$1 = function () { - try { - Object.keys("primitive"); - return true; - } catch (e) { - return false; - } - }; - - // eslint-disable-next-line no-empty-function - var noop = function () {}; - - var _undefined = noop(); // Support ES3 engines - - var isValue = function (val) { - return (val !== _undefined) && (val !== null); - }; - - var keys = Object.keys; - - var shim = function (object) { - return keys(isValue(object) ? Object(object) : object); - }; - - var keys$1 = isImplemented$1() - ? 
Object.keys - : shim; - - var validValue = function (value) { - if (!isValue(value)) throw new TypeError("Cannot use null or undefined"); - return value; - }; - - var max = Math.max; - - var shim$1 = function (dest, src /*, …srcn*/) { - var error, i, length = max(arguments.length, 2), assign; - dest = Object(validValue(dest)); - assign = function (key) { - try { - dest[key] = src[key]; - } catch (e) { - if (!error) error = e; - } - }; - for (i = 1; i < length; ++i) { - src = arguments[i]; - keys$1(src).forEach(assign); - } - if (error !== undefined) throw error; - return dest; - }; - - var assign = isImplemented() - ? Object.assign - : shim$1; - - var forEach = Array.prototype.forEach, create = Object.create; - - var process = function (src, obj) { - var key; - for (key in src) obj[key] = src[key]; - }; - - // eslint-disable-next-line no-unused-vars - var normalizeOptions = function (opts1 /*, …options*/) { - var result = create(null); - forEach.call(arguments, function (options) { - if (!isValue(options)) return; - process(Object(options), result); - }); - return result; - }; - - // Deprecated - - var isCallable = function (obj) { - return typeof obj === "function"; - }; - - var str = "razdwatrzy"; - - var isImplemented$2 = function () { - if (typeof str.contains !== "function") return false; - return (str.contains("dwa") === true) && (str.contains("foo") === false); - }; - - var indexOf = String.prototype.indexOf; - - var shim$2 = function (searchString/*, position*/) { - return indexOf.call(this, searchString, arguments[1]) > -1; - }; - - var contains = isImplemented$2() - ? 
String.prototype.contains - : shim$2; - - var d_1 = createCommonjsModule(function (module) { - - var d; - - d = module.exports = function (dscr, value/*, options*/) { - var c, e, w, options, desc; - if ((arguments.length < 2) || (typeof dscr !== 'string')) { - options = value; - value = dscr; - dscr = null; - } else { - options = arguments[2]; - } - if (dscr == null) { - c = w = true; - e = false; - } else { - c = contains.call(dscr, 'c'); - e = contains.call(dscr, 'e'); - w = contains.call(dscr, 'w'); - } - - desc = { value: value, configurable: c, enumerable: e, writable: w }; - return !options ? desc : assign(normalizeOptions(options), desc); - }; - - d.gs = function (dscr, get, set/*, options*/) { - var c, e, options, desc; - if (typeof dscr !== 'string') { - options = set; - set = get; - get = dscr; - dscr = null; - } else { - options = arguments[3]; - } - if (get == null) { - get = undefined; - } else if (!isCallable(get)) { - options = get; - get = set = undefined; - } else if (set == null) { - set = undefined; - } else if (!isCallable(set)) { - options = set; - set = undefined; - } - if (dscr == null) { - c = true; - e = false; - } else { - c = contains.call(dscr, 'c'); - e = contains.call(dscr, 'e'); - } - - desc = { get: get, set: set, configurable: c, enumerable: e }; - return !options ? 
desc : assign(normalizeOptions(options), desc); - }; - }); - - var validCallable = function (fn) { - if (typeof fn !== "function") throw new TypeError(fn + " is not a function"); - return fn; - }; - - var eventEmitter = createCommonjsModule(function (module, exports) { - - var apply = Function.prototype.apply, call = Function.prototype.call - , create = Object.create, defineProperty = Object.defineProperty - , defineProperties = Object.defineProperties - , hasOwnProperty = Object.prototype.hasOwnProperty - , descriptor = { configurable: true, enumerable: false, writable: true } - - , on, once, off, emit, methods, descriptors, base; - - on = function (type, listener) { - var data; - - validCallable(listener); - - if (!hasOwnProperty.call(this, '__ee__')) { - data = descriptor.value = create(null); - defineProperty(this, '__ee__', descriptor); - descriptor.value = null; - } else { - data = this.__ee__; - } - if (!data[type]) data[type] = listener; - else if (typeof data[type] === 'object') data[type].push(listener); - else data[type] = [data[type], listener]; - - return this; - }; - - once = function (type, listener) { - var once, self; - - validCallable(listener); - self = this; - on.call(this, type, once = function () { - off.call(self, type, once); - apply.call(listener, this, arguments); - }); - - once.__eeOnceListener__ = listener; - return this; - }; - - off = function (type, listener) { - var data, listeners, candidate, i; - - validCallable(listener); - - if (!hasOwnProperty.call(this, '__ee__')) return this; - data = this.__ee__; - if (!data[type]) return this; - listeners = data[type]; - - if (typeof listeners === 'object') { - for (i = 0; (candidate = listeners[i]); ++i) { - if ((candidate === listener) || - (candidate.__eeOnceListener__ === listener)) { - if (listeners.length === 2) data[type] = listeners[i ? 
0 : 1]; - else listeners.splice(i, 1); - } - } - } else { - if ((listeners === listener) || - (listeners.__eeOnceListener__ === listener)) { - delete data[type]; - } - } - - return this; - }; - - emit = function (type) { - var i, l, listener, listeners, args; - - if (!hasOwnProperty.call(this, '__ee__')) return; - listeners = this.__ee__[type]; - if (!listeners) return; - - if (typeof listeners === 'object') { - l = arguments.length; - args = new Array(l - 1); - for (i = 1; i < l; ++i) args[i - 1] = arguments[i]; - - listeners = listeners.slice(); - for (i = 0; (listener = listeners[i]); ++i) { - apply.call(listener, this, args); - } - } else { - switch (arguments.length) { - case 1: - call.call(listeners, this); - break; - case 2: - call.call(listeners, this, arguments[1]); - break; - case 3: - call.call(listeners, this, arguments[1], arguments[2]); - break; - default: - l = arguments.length; - args = new Array(l - 1); - for (i = 1; i < l; ++i) { - args[i - 1] = arguments[i]; - } - apply.call(listeners, this, args); - } - } - }; - - methods = { - on: on, - once: once, - off: off, - emit: emit - }; - - descriptors = { - on: d_1(on), - once: d_1(once), - off: d_1(off), - emit: d_1(emit) - }; - - base = defineProperties({}, descriptors); - - module.exports = exports = function (o) { - return (o == null) ? create(base) : defineProperties(Object(o), descriptors); - }; - exports.methods = methods; - }); - var eventEmitter_1 = eventEmitter.methods; - - /** - * Hooks allow for injecting functions that must all complete in order before finishing - * They will execute in parallel but all must finish before continuing - * Functions may return a promise if they are asycn. 
- * From epubjs/src/utils/hooks - * @param {any} context scope of this - * @example this.content = new Hook(this); - */ - class Hook { - constructor(context){ - this.context = context || this; - this.hooks = []; - } - - /** - * Adds a function to be run before a hook completes - * @example this.content.register(function(){...}); - * @return {undefined} void - */ - register(){ - for(var i = 0; i < arguments.length; ++i) { - if (typeof arguments[i] === "function") { - this.hooks.push(arguments[i]); - } else { - // unpack array - for(var j = 0; j < arguments[i].length; ++j) { - this.hooks.push(arguments[i][j]); - } - } - } - } - - /** - * Triggers a hook to run all functions - * @example this.content.trigger(args).then(function(){...}); - * @return {Promise} results - */ - trigger(){ - var args = arguments; - var context = this.context; - var promises = []; - - this.hooks.forEach(function(task) { - var executing = task.apply(context, args); - - if(executing && typeof executing["then"] === "function") { - // Task is a function that returns a promise - promises.push(executing); - } - // Otherwise Task resolves immediately, add resolved promise with result - promises.push(new Promise((resolve, reject) => { - resolve(executing); - })); - }); - - - return Promise.all(promises); - } - - /** - * Triggers a hook to run all functions synchronously - * @example this.content.trigger(args).then(function(){...}); - * @return {Array} results - */ - triggerSync(){ - var args = arguments; - var context = this.context; - var results = []; - - this.hooks.forEach(function(task) { - var executing = task.apply(context, args); - - results.push(executing); - }); - - - return results; - } - - // Adds a function to be run before a hook completes - list(){ - return this.hooks; - } - - clear(){ - return this.hooks = []; - } - } - - function getBoundingClientRect(element) { - if (!element) { - return; - } - let rect; - if (typeof element.getBoundingClientRect !== "undefined") { - rect = 
element.getBoundingClientRect(); - } else { - let range = document.createRange(); - range.selectNode(element); - rect = range.getBoundingClientRect(); - } - return rect; - } - - function getClientRects(element) { - if (!element) { - return; - } - let rect; - if (typeof element.getClientRects !== "undefined") { - rect = element.getClientRects(); - } else { - let range = document.createRange(); - range.selectNode(element); - rect = range.getClientRects(); - } - return rect; - } - - /** - * Generates a UUID - * based on: http://stackoverflow.com/questions/105034/how-to-create-a-guid-uuid-in-javascript - * @returns {string} uuid - */ - function UUID() { - var d = new Date().getTime(); - if (typeof performance !== "undefined" && typeof performance.now === "function") { - d += performance.now(); //use high-precision timer if available - } - return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, function (c) { - var r = (d + Math.random() * 16) % 16 | 0; - d = Math.floor(d / 16); - return (c === "x" ? r : (r & 0x3 | 0x8)).toString(16); - }); - } - - function attr(element, attributes) { - for (var i = 0; i < attributes.length; i++) { - if (element.hasAttribute(attributes[i])) { - return element.getAttribute(attributes[i]); - } - } - } - - /* Based on by https://mths.be/cssescape v1.5.1 by @mathias | MIT license - * Allows # and . - */ - function querySelectorEscape(value) { - if (arguments.length == 0) { - throw new TypeError("`CSS.escape` requires an argument."); - } - var string = String(value); - - var length = string.length; - var index = -1; - var codeUnit; - var result = ""; - var firstCodeUnit = string.charCodeAt(0); - while (++index < length) { - codeUnit = string.charCodeAt(index); - - - - // Note: there’s no need to special-case astral symbols, surrogate - // pairs, or lone surrogates. - - // If the character is NULL (U+0000), then the REPLACEMENT CHARACTER - // (U+FFFD). 
- if (codeUnit == 0x0000) { - result += "\uFFFD"; - continue; - } - - if ( - // If the character is in the range [\1-\1F] (U+0001 to U+001F) or is - // U+007F, […] - (codeUnit >= 0x0001 && codeUnit <= 0x001F) || codeUnit == 0x007F || - // If the character is the first character and is in the range [0-9] - // (U+0030 to U+0039), […] - (index == 0 && codeUnit >= 0x0030 && codeUnit <= 0x0039) || - // If the character is the second character and is in the range [0-9] - // (U+0030 to U+0039) and the first character is a `-` (U+002D), […] - ( - index == 1 && - codeUnit >= 0x0030 && codeUnit <= 0x0039 && - firstCodeUnit == 0x002D - ) - ) { - // https://drafts.csswg.org/cssom/#escape-a-character-as-code-point - result += "\\" + codeUnit.toString(16) + " "; - continue; - } - - if ( - // If the character is the first character and is a `-` (U+002D), and - // there is no second character, […] - index == 0 && - length == 1 && - codeUnit == 0x002D - ) { - result += "\\" + string.charAt(index); - continue; - } - - // support for period character in id - if (codeUnit == 0x002E) { - if (string.charAt(0) == "#") { - result += "\\."; - continue; - } - } - - - // If the character is not handled by one of the above rules and is - // greater than or equal to U+0080, is `-` (U+002D) or `_` (U+005F), or - // is in one of the ranges [0-9] (U+0030 to U+0039), [A-Z] (U+0041 to - // U+005A), or [a-z] (U+0061 to U+007A), […] - if ( - codeUnit >= 0x0080 || - codeUnit == 0x002D || - codeUnit == 0x005F || - codeUnit == 35 || // Allow # - codeUnit == 46 || // Allow . - codeUnit >= 0x0030 && codeUnit <= 0x0039 || - codeUnit >= 0x0041 && codeUnit <= 0x005A || - codeUnit >= 0x0061 && codeUnit <= 0x007A - ) { - // the character itself - result += string.charAt(index); - continue; - } - - // Otherwise, the escaped character. 
- // https://drafts.csswg.org/cssom/#escape-a-character - result += "\\" + string.charAt(index); - - } - return result; - } - - /** - * Creates a new pending promise and provides methods to resolve or reject it. - * From: https://developer.mozilla.org/en-US/docs/Mozilla/JavaScript_code_modules/Promise.jsm/Deferred#backwards_forwards_compatible - * @returns {object} defered - */ - function defer() { - this.resolve = null; - - this.reject = null; - - this.id = UUID(); - - this.promise = new Promise((resolve, reject) => { - this.resolve = resolve; - this.reject = reject; - }); - Object.freeze(this); - } - - const requestIdleCallback = typeof window !== "undefined" && ("requestIdleCallback" in window ? window.requestIdleCallback : window.requestAnimationFrame); - - function CSSValueToString(obj) { - return obj.value + (obj.unit || ""); - } - - function isElement(node) { - return node && node.nodeType === 1; - } - - function isText(node) { - return node && node.nodeType === 3; - } - - function *walk(start, limiter) { - let node = start; - - while (node) { - - yield node; - - if (node.childNodes.length) { - node = node.firstChild; - } else if (node.nextSibling) { - if (limiter && node === limiter) { - node = undefined; - break; - } - node = node.nextSibling; - } else { - while (node) { - node = node.parentNode; - if (limiter && node === limiter) { - node = undefined; - break; - } - if (node && node.nextSibling) { - node = node.nextSibling; - break; - } - - } - } - } - } - - function nodeAfter(node, limiter) { - let after = node; - - if (after.nextSibling) { - if (limiter && node === limiter) { - return; - } - after = after.nextSibling; - } else { - while (after) { - after = after.parentNode; - if (limiter && after === limiter) { - after = undefined; - break; - } - if (after && after.nextSibling) { - after = after.nextSibling; - break; - } - } - } - - return after; - } - - function nodeBefore(node, limiter) { - let before = node; - if (before.previousSibling) { - if 
(limiter && node === limiter) { - return; - } - before = before.previousSibling; - } else { - while (before) { - before = before.parentNode; - if (limiter && before === limiter) { - before = undefined; - break; - } - if (before && before.previousSibling) { - before = before.previousSibling; - break; - } - } - } - - return before; - } - - function elementAfter(node, limiter) { - let after = nodeAfter(node); - - while (after && after.nodeType !== 1) { - after = nodeAfter(after); - } - - return after; - } - - function rebuildAncestors(node) { - let parent, ancestor; - let ancestors = []; - let added = []; - - let fragment = document.createDocumentFragment(); - - // Gather all ancestors - let element = node; - while(element.parentNode && element.parentNode.nodeType === 1) { - ancestors.unshift(element.parentNode); - element = element.parentNode; - } - - for (var i = 0; i < ancestors.length; i++) { - ancestor = ancestors[i]; - parent = ancestor.cloneNode(false); - - parent.setAttribute("data-split-from", parent.getAttribute("data-ref")); - // ancestor.setAttribute("data-split-to", parent.getAttribute("data-ref")); - - if (parent.hasAttribute("id")) { - let dataID = parent.getAttribute("id"); - parent.setAttribute("data-id", dataID); - parent.removeAttribute("id"); - } - - // This is handled by css :not, but also tidied up here - if (parent.hasAttribute("data-break-before")) { - parent.removeAttribute("data-break-before"); - } - - if (parent.hasAttribute("data-previous-break-after")) { - parent.removeAttribute("data-previous-break-after"); - } - - if (added.length) { - let container = added[added.length-1]; - container.appendChild(parent); - } else { - fragment.appendChild(parent); - } - added.push(parent); - } - - added = undefined; - return fragment; - } - - /* - export function split(bound, cutElement, breakAfter) { - let needsRemoval = []; - let index = indexOf(cutElement); - - if (!breakAfter && index === 0) { - return; - } - - if (breakAfter && index === 
(cutElement.parentNode.children.length - 1)) { - return; - } - - // Create a fragment with rebuilt ancestors - let fragment = rebuildAncestors(cutElement); - - // Clone cut - if (!breakAfter) { - let clone = cutElement.cloneNode(true); - let ref = cutElement.parentNode.getAttribute('data-ref'); - let parent = fragment.querySelector("[data-ref='" + ref + "']"); - parent.appendChild(clone); - needsRemoval.push(cutElement); - } - - // Remove all after cut - let next = nodeAfter(cutElement, bound); - while (next) { - let clone = next.cloneNode(true); - let ref = next.parentNode.getAttribute('data-ref'); - let parent = fragment.querySelector("[data-ref='" + ref + "']"); - parent.appendChild(clone); - needsRemoval.push(next); - next = nodeAfter(next, bound); - } - - // Remove originals - needsRemoval.forEach((node) => { - if (node) { - node.remove(); - } - }); - - // Insert after bounds - bound.parentNode.insertBefore(fragment, bound.nextSibling); - return [bound, bound.nextSibling]; - } - */ - - function needsBreakBefore(node) { - if( typeof node !== "undefined" && - typeof node.dataset !== "undefined" && - typeof node.dataset.breakBefore !== "undefined" && - (node.dataset.breakBefore === "always" || - node.dataset.breakBefore === "page" || - node.dataset.breakBefore === "left" || - node.dataset.breakBefore === "right" || - node.dataset.breakBefore === "recto" || - node.dataset.breakBefore === "verso") - ) { - return true; - } - - return false; - } - - function needsPreviousBreakAfter(node) { - if( typeof node !== "undefined" && - typeof node.dataset !== "undefined" && - typeof node.dataset.previousBreakAfter !== "undefined" && - (node.dataset.previousBreakAfter === "always" || - node.dataset.previousBreakAfter === "page" || - node.dataset.previousBreakAfter === "left" || - node.dataset.previousBreakAfter === "right" || - node.dataset.previousBreakAfter === "recto" || - node.dataset.previousBreakAfter === "verso") - ) { - return true; - } - - return false; - } - - 
function needsPageBreak(node) { - if( typeof node !== "undefined" && - typeof node.dataset !== "undefined" && - (node.dataset.page || node.dataset.afterPage) - ) { - return true; - } - - return false; - } - - function *words(node) { - let currentText = node.nodeValue; - let max = currentText.length; - let currentOffset = 0; - let currentLetter; - - let range; - - while(currentOffset < max) { - currentLetter = currentText[currentOffset]; - if (/^[\S\u202F\u00A0]$/.test(currentLetter)) { - if (!range) { - range = document.createRange(); - range.setStart(node, currentOffset); - } - } else { - if (range) { - range.setEnd(node, currentOffset); - yield range; - range = undefined; - } - } - - currentOffset += 1; - } - - if (range) { - range.setEnd(node, currentOffset); - yield range; - range = undefined; - } - } - - function *letters(wordRange) { - let currentText = wordRange.startContainer; - let max = currentText.length; - let currentOffset = wordRange.startOffset; - // let currentLetter; - - let range; - - while(currentOffset < max) { - // currentLetter = currentText[currentOffset]; - range = document.createRange(); - range.setStart(currentText, currentOffset); - range.setEnd(currentText, currentOffset+1); - - yield range; - - currentOffset += 1; - } - } - - function isContainer(node) { - let container; - - if (typeof node.tagName === "undefined") { - return true; - } - - if (node.style.display === "none") { - return false; - } - - switch (node.tagName) { - // Inline - case "A": - case "ABBR": - case "ACRONYM": - case "B": - case "BDO": - case "BIG": - case "BR": - case "BUTTON": - case "CITE": - case "CODE": - case "DFN": - case "EM": - case "I": - case "IMG": - case "INPUT": - case "KBD": - case "LABEL": - case "MAP": - case "OBJECT": - case "Q": - case "SAMP": - case "SCRIPT": - case "SELECT": - case "SMALL": - case "SPAN": - case "STRONG": - case "SUB": - case "SUP": - case "TEXTAREA": - case "TIME": - case "TT": - case "VAR": - case "P": - case "H1": - case "H2": 
- case "H3": - case "H4": - case "H5": - case "H6": - case "FIGCAPTION": - case "BLOCKQUOTE": - case "PRE": - case "LI": - case "TR": - case "DT": - case "DD": - case "VIDEO": - case "CANVAS": - container = false; - break; - default: - container = true; - } - - return container; - } - - function cloneNode(n, deep=false) { - return n.cloneNode(deep); - } - - function findElement(node, doc) { - const ref = node.getAttribute("data-ref"); - return findRef(ref, doc); - } - - function findRef(ref, doc) { - return doc.querySelector(`[data-ref='${ref}']`); - } - - function validNode(node) { - if (isText(node)) { - return true; - } - - if (isElement(node) && node.dataset.ref) { - return true; - } - - return false; - } - - function prevValidNode(node) { - while (!validNode(node)) { - if (node.previousSibling) { - node = node.previousSibling; - } else { - node = node.parentNode; - } - - if (!node) { - break; - } - } - - return node; - } - - - function indexOf$1(node) { - let parent = node.parentNode; - if (!parent) { - return 0; - } - return Array.prototype.indexOf.call(parent.childNodes, node); - } - - function child(node, index) { - return node.childNodes[index]; - } - - function hasContent(node) { - if (isElement(node)) { - return true; - } else if (isText(node) && - node.textContent.trim().length) { - return true; - } - return false; - } - - function indexOfTextNode(node, parent) { - if (!isText(node)) { - return -1; - } - let nodeTextContent = node.textContent; - let child; - let index = -1; - for (var i = 0; i < parent.childNodes.length; i++) { - child = parent.childNodes[i]; - if (child.nodeType === 3) { - let text = parent.childNodes[i].textContent; - if (text.includes(nodeTextContent)) { - index = i; - break; - } - } - } - - return index; - } - - const MAX_CHARS_PER_BREAK = 1500; - - /** - * Layout - * @class - */ - class Layout { - - constructor(element, hooks, options) { - this.element = element; - - this.bounds = this.element.getBoundingClientRect(); - - if 
(hooks) { - this.hooks = hooks; - } else { - this.hooks = {}; - this.hooks.layout = new Hook(); - this.hooks.renderNode = new Hook(); - this.hooks.layoutNode = new Hook(); - this.hooks.beforeOverflow = new Hook(); - this.hooks.onOverflow = new Hook(); - this.hooks.onBreakToken = new Hook(); - } - - this.settings = options || {}; - - this.maxChars = this.settings.maxChars || MAX_CHARS_PER_BREAK; - } - - async renderTo(wrapper, source, breakToken, bounds=this.bounds) { - let start = this.getStart(source, breakToken); - let walker = walk(start, source); - - let node; - let done; - let next; - - let hasRenderedContent = false; - let newBreakToken; - - let length = 0; - - while (!done && !newBreakToken) { - next = walker.next(); - node = next.value; - done = next.done; - - if (!node) { - this.hooks && this.hooks.layout.trigger(wrapper, this); - - let imgs = wrapper.querySelectorAll("img"); - if (imgs.length) { - await this.waitForImages(imgs); - } - - newBreakToken = this.findBreakToken(wrapper, source, bounds); - return newBreakToken; - } - - this.hooks && this.hooks.layoutNode.trigger(node); - - // Check if the rendered element has a break set - if (hasRenderedContent && this.shouldBreak(node)) { - - this.hooks && this.hooks.layout.trigger(wrapper, this); - - let imgs = wrapper.querySelectorAll("img"); - if (imgs.length) { - await this.waitForImages(imgs); - } - - newBreakToken = this.findBreakToken(wrapper, source, bounds); - - if (!newBreakToken) { - newBreakToken = this.breakAt(node); - } - - length = 0; - - break; - } - - // Should the Node be a shallow or deep clone - let shallow = isContainer(node); - - let rendered = this.append(node, wrapper, breakToken, shallow); - - length += rendered.textContent.length; - - // Check if layout has content yet - if (!hasRenderedContent) { - hasRenderedContent = hasContent(node); - } - - // Skip to the next node if a deep clone was rendered - if (!shallow) { - walker = walk(nodeAfter(node, source), source); - } - - // Only 
check x characters - if (length >= this.maxChars) { - - this.hooks && this.hooks.layout.trigger(wrapper, this); - - let imgs = wrapper.querySelectorAll("img"); - if (imgs.length) { - await this.waitForImages(imgs); - } - - newBreakToken = this.findBreakToken(wrapper, source, bounds); - - if (newBreakToken) { - length = 0; - } - } - - } - - return newBreakToken; - } - - breakAt(node, offset=0) { - return { - node, - offset - }; - } - - shouldBreak(node) { - let previousSibling = node.previousSibling; - let parentNode = node.parentNode; - let parentBreakBefore = needsBreakBefore(node) && parentNode && !previousSibling && needsBreakBefore(parentNode); - let doubleBreakBefore; - - if (parentBreakBefore) { - doubleBreakBefore = node.dataset.breakBefore === parentNode.dataset.breakBefore; - } - - return !doubleBreakBefore && needsBreakBefore(node) || needsPreviousBreakAfter(node) || needsPageBreak(node); - } - - getStart(source, breakToken) { - let start; - let node = breakToken && breakToken.node; - - if (node) { - start = node; - } else { - start = source.firstChild; - } - - return start; - } - - append(node, dest, breakToken, shallow=true, rebuild=true) { - - let clone = cloneNode(node, !shallow); - - if (node.parentNode && isElement(node.parentNode)) { - let parent = findElement(node.parentNode, dest); - // Rebuild chain - if (parent) { - parent.appendChild(clone); - } else if (rebuild) { - let fragment = rebuildAncestors(node); - parent = findElement(node.parentNode, fragment); - if (!parent) { - dest.appendChild(clone); - } else if (breakToken && isText(breakToken.node) && breakToken.offset > 0) { - clone.textContent = clone.textContent.substring(breakToken.offset); - parent.appendChild(clone); - } else { - parent.appendChild(clone); - } - - dest.appendChild(fragment); - } else { - dest.appendChild(clone); - } - - - } else { - dest.appendChild(clone); - } - - let nodeHooks = this.hooks.renderNode.triggerSync(clone, node); - nodeHooks.forEach((newNode) => { - if 
(typeof newNode != "undefined") { - clone = newNode; - } - }); - - return clone; - } - - async waitForImages(imgs) { - let results = Array.from(imgs).map(async (img) => { - return this.awaitImageLoaded(img); - }); - await Promise.all(results); - } - - async awaitImageLoaded(image) { - return new Promise(resolve => { - if (image.complete !== true) { - image.onload = function() { - let { width, height } = window.getComputedStyle(image); - resolve(width, height); - }; - image.onerror = function(e) { - let { width, height } = window.getComputedStyle(image); - resolve(width, height, e); - }; - } else { - let { width, height } = window.getComputedStyle(image); - resolve(width, height); - } - }); - } - - avoidBreakInside(node, limiter) { - let breakNode; - - if (node === limiter) { - return; - } - - while (node.parentNode) { - node = node.parentNode; - - if (node === limiter) { - break; - } - - if(window.getComputedStyle(node)["break-inside"] === "avoid") { - breakNode = node; - break; - } - - } - return breakNode; - } - - createBreakToken(overflow, rendered, source) { - let container = overflow.startContainer; - let offset = overflow.startOffset; - let node, renderedNode, parent, index, temp; - - if (isElement(container)) { - temp = child(container, offset); - - if (isElement(temp)) { - renderedNode = findElement(temp, rendered); - - if (!renderedNode) { - // Find closest element with data-ref - renderedNode = findElement(prevValidNode(temp), rendered); - // Check if temp is the last rendered node at its level. - if (!temp.nextSibling) { - // We need to ensure that the previous sibling of temp is fully rendered. 
- const renderedNodeFromSource = findElement(renderedNode, source); - const walker = document.createTreeWalker(renderedNodeFromSource, NodeFilter.SHOW_ELEMENT); - const lastChildOfRenderedNodeFromSource = walker.lastChild(); - const lastChildOfRenderedNodeMatchingFromRendered = findElement(lastChildOfRenderedNodeFromSource, rendered); - // Check if we found that the last child in source - if (!lastChildOfRenderedNodeMatchingFromRendered) { - // Pending content to be rendered before virtual break token - return; - } - // Otherwise we will return a break token as per below - } - // renderedNode is actually the last unbroken box that does not overflow. - // Break Token is therefore the next sibling of renderedNode within source node. - node = findElement(renderedNode, source).nextSibling; - offset = 0; - } else { - node = findElement(renderedNode, source); - offset = 0; - } - } else { - renderedNode = findElement(container, rendered); - - if (!renderedNode) { - renderedNode = findElement(prevValidNode(container), rendered); - } - - parent = findElement(renderedNode, source); - index = indexOfTextNode(temp, parent); - node = child(parent, index); - offset = 0; - } - } else { - renderedNode = findElement(container.parentNode, rendered); - - if (!renderedNode) { - renderedNode = findElement(prevValidNode(container.parentNode), rendered); - } - - parent = findElement(renderedNode, source); - index = indexOfTextNode(container, parent); - - if (index === -1) { - return; - } - - node = child(parent, index); - - offset += node.textContent.indexOf(container.textContent); - } - - if (!node) { - return; - } - - return { - node, - offset - }; - - } - - findBreakToken(rendered, source, bounds=this.bounds, extract=true) { - let overflow = this.findOverflow(rendered, bounds); - let breakToken, breakLetter; - - let overflowHooks = this.hooks.onOverflow.triggerSync(overflow, rendered, bounds, this); - overflowHooks.forEach((newOverflow) => { - if (typeof newOverflow != "undefined") { 
- overflow = newOverflow; - } - }); - - if (overflow) { - breakToken = this.createBreakToken(overflow, rendered, source); - // breakToken is nullable - if (breakToken && breakToken["node"] && breakToken["offset"] && breakToken["node"].textContent) { - breakLetter = breakToken["node"].textContent.charAt(breakToken["offset"]); - } else { - breakLetter = undefined; - } - - let breakHooks = this.hooks.onBreakToken.triggerSync(breakToken, overflow, rendered, this); - breakHooks.forEach((newToken) => { - if (typeof newToken != "undefined") { - breakToken = newToken; - } - }); - - - if (breakToken && breakToken.node && extract) { - this.removeOverflow(overflow, breakLetter); - } - - } - return breakToken; - } - - hasOverflow(element, bounds=this.bounds) { - let constrainingElement = element && element.parentNode; // this gets the element, instead of the wrapper for the width workaround - let { width } = element.getBoundingClientRect(); - let scrollWidth = constrainingElement ? constrainingElement.scrollWidth : 0; - return Math.max(Math.floor(width), scrollWidth) > Math.round(bounds.width); - } - - findOverflow(rendered, bounds=this.bounds) { - if (!this.hasOverflow(rendered, bounds)) return; - - let start = Math.round(bounds.left); - let end = Math.round(bounds.right); - let range; - - let walker = walk(rendered.firstChild, rendered); - - // Find Start - let next, done, node, offset, skip, breakAvoid, prev, br; - while (!done) { - next = walker.next(); - done = next.done; - node = next.value; - skip = false; - breakAvoid = false; - prev = undefined; - br = undefined; - - if (node) { - let pos = getBoundingClientRect(node); - let left = Math.round(pos.left); - let right = Math.floor(pos.right); - - if (!range && left >= end) { - // Check if it is a float - let isFloat = false; - - if (isElement(node) ) { - let styles = window.getComputedStyle(node); - isFloat = styles.getPropertyValue("float") !== "none"; - skip = styles.getPropertyValue("break-inside") === "avoid"; - 
breakAvoid = node.dataset.breakBefore === "avoid" || node.dataset.previousBreakAfter === "avoid"; - prev = breakAvoid && nodeBefore(node, rendered); - br = node.tagName === "BR" || node.tagName === "WBR"; - } - - if (prev) { - range = document.createRange(); - range.setStartBefore(prev); - break; - } - - if (!br && !isFloat && isElement(node)) { - range = document.createRange(); - range.setStartBefore(node); - break; - } - - if (isText(node) && node.textContent.trim().length) { - range = document.createRange(); - range.setStartBefore(node); - break; - } - - } - - if (!range && isText(node) && - node.textContent.trim().length && - window.getComputedStyle(node.parentNode)["break-inside"] !== "avoid") { - - let rects = getClientRects(node); - let rect; - left = 0; - for (var i = 0; i != rects.length; i++) { - rect = rects[i]; - if (rect.width > 0 && (!left || rect.left > left)) { - left = rect.left; - } - } - - if(left >= end) { - range = document.createRange(); - offset = this.textBreak(node, start, end); - if (!offset) { - range = undefined; - } else { - range.setStart(node, offset); - } - break; - } - } - - // Skip children - if (skip || right <= end) { - next = nodeAfter(node, rendered); - if (next) { - walker = walk(next, rendered); - } - - } - - } - } - - // Find End - if (range) { - range.setEndAfter(rendered.lastChild); - return range; - } - - } - - findEndToken(rendered, source, bounds=this.bounds) { - if (rendered.childNodes.length === 0) { - return; - } - - let lastChild = rendered.lastChild; - - let lastNodeIndex; - while (lastChild && lastChild.lastChild) { - if (!validNode(lastChild)) { - // Only get elements with refs - lastChild = lastChild.previousSibling; - } else if(!validNode(lastChild.lastChild)) { - // Deal with invalid dom items - lastChild = prevValidNode(lastChild.lastChild); - break; - } else { - lastChild = lastChild.lastChild; - } - } - - if (isText(lastChild)) { - - if (lastChild.parentNode.dataset.ref) { - lastNodeIndex = 
indexOf$1(lastChild); - lastChild = lastChild.parentNode; - } else { - lastChild = lastChild.previousSibling; - } - } - - let original = findElement(lastChild, source); - - if (lastNodeIndex) { - original = original.childNodes[lastNodeIndex]; - } - - let after = nodeAfter(original); - - return this.breakAt(after); - } - - textBreak(node, start, end) { - let wordwalker = words(node); - let left = 0; - let right = 0; - let word, next, done, pos; - let offset; - while (!done) { - next = wordwalker.next(); - word = next.value; - done = next.done; - - if (!word) { - break; - } - - pos = getBoundingClientRect(word); - - left = Math.floor(pos.left); - right = Math.floor(pos.right); - - if (left >= end) { - offset = word.startOffset; - break; - } - - if (right > end) { - let letterwalker = letters(word); - let letter, nextLetter, doneLetter; - - while (!doneLetter) { - nextLetter = letterwalker.next(); - letter = nextLetter.value; - doneLetter = nextLetter.done; - - if (!letter) { - break; - } - - pos = getBoundingClientRect(letter); - left = Math.floor(pos.left); - - if (left >= end) { - offset = letter.startOffset; - done = true; - - break; - } - } - } - - } - - return offset; - } - - removeOverflow(overflow, breakLetter) { - let {startContainer} = overflow; - let extracted = overflow.extractContents(); - - this.hyphenateAtBreak(startContainer, breakLetter); - - return extracted; - } - - hyphenateAtBreak(startContainer, breakLetter) { - if (isText(startContainer)) { - let startText = startContainer.textContent; - let prevLetter = startText[startText.length-1]; - - // Add a hyphen if previous character is a letter or soft hyphen - if ( - (breakLetter && /^\w|\u00AD$/.test(prevLetter) && /^\w|\u00AD$/.test(breakLetter)) || - (!breakLetter && /^\w|\u00AD$/.test(prevLetter)) - ) { - startContainer.parentNode.classList.add("pagedjs_hyphen"); - startContainer.textContent += this.settings.hyphenGlyph || "\u2011"; - } - } - } - } - - eventEmitter(Layout.prototype); - - /** - * 
Render a page - * @class - */ - class Page { - constructor(pagesArea, pageTemplate, blank, hooks) { - this.pagesArea = pagesArea; - this.pageTemplate = pageTemplate; - this.blank = blank; - - this.width = undefined; - this.height = undefined; - - this.hooks = hooks; - - // this.element = this.create(this.pageTemplate); - } - - create(template, after) { - //let documentFragment = document.createRange().createContextualFragment( TEMPLATE ); - //let page = documentFragment.children[0]; - let clone = document.importNode(this.pageTemplate.content, true); - - let page, index; - if (after) { - this.pagesArea.insertBefore(clone, after.nextElementSibling); - index = Array.prototype.indexOf.call(this.pagesArea.children, after.nextElementSibling); - page = this.pagesArea.children[index]; - } else { - this.pagesArea.appendChild(clone); - page = this.pagesArea.lastChild; - } - - let pagebox = page.querySelector(".pagedjs_pagebox"); - let area = page.querySelector(".pagedjs_page_content"); - - - let size = area.getBoundingClientRect(); - - - area.style.columnWidth = Math.round(size.width) + "px"; - area.style.columnGap = "calc(var(--pagedjs-margin-right) + var(--pagedjs-margin-left))"; - // area.style.overflow = "scroll"; - - this.width = Math.round(size.width); - this.height = Math.round(size.height); - - this.element = page; - this.pagebox = pagebox; - this.area = area; - - return page; - } - - createWrapper() { - let wrapper = document.createElement("div"); - - this.area.appendChild(wrapper); - - this.wrapper = wrapper; - - return wrapper; - } - - index(pgnum) { - this.position = pgnum; - - let page = this.element; - // let pagebox = this.pagebox; - - let index = pgnum+1; - - let id = `page-${index}`; - - this.id = id; - - // page.dataset.pageNumber = index; - - page.dataset.pageNumber = index; - page.setAttribute("id", id); - - if (this.name) { - page.classList.add("pagedjs_" + this.name + "_page"); - } - - if (this.blank) { - page.classList.add("pagedjs_blank_page"); - } - 
- if (pgnum === 0) { - page.classList.add("pagedjs_first_page"); - } - - if (pgnum % 2 !== 1) { - page.classList.remove("pagedjs_left_page"); - page.classList.add("pagedjs_right_page"); - } else { - page.classList.remove("pagedjs_right_page"); - page.classList.add("pagedjs_left_page"); - } - } - - /* - size(width, height) { - if (width === this.width && height === this.height) { - return; - } - this.width = width; - this.height = height; - - this.element.style.width = Math.round(width) + "px"; - this.element.style.height = Math.round(height) + "px"; - this.element.style.columnWidth = Math.round(width) + "px"; - } - */ - - async layout(contents, breakToken, maxChars) { - - this.clear(); - - this.startToken = breakToken; - - this.layoutMethod = new Layout(this.area, this.hooks, maxChars); - - let newBreakToken = await this.layoutMethod.renderTo(this.wrapper, contents, breakToken); - - this.addListeners(contents); - - this.endToken = newBreakToken; - - return newBreakToken; - } - - async append(contents, breakToken) { - - if (!this.layoutMethod) { - return this.layout(contents, breakToken); - } - - let newBreakToken = await this.layoutMethod.renderTo(this.wrapper, contents, breakToken); - - this.endToken = newBreakToken; - - return newBreakToken; - } - - getByParent(ref, entries) { - let e; - for (var i = 0; i < entries.length; i++) { - e = entries[i]; - if(e.dataset.ref === ref) { - return e; - } - } - } - - onOverflow(func) { - this._onOverflow = func; - } - - onUnderflow(func) { - this._onUnderflow = func; - } - - clear() { - this.removeListeners(); - this.wrapper && this.wrapper.remove(); - this.createWrapper(); - } - - addListeners(contents) { - if (typeof ResizeObserver !== "undefined") { - this.addResizeObserver(contents); - } else { - this._checkOverflowAfterResize = this.checkOverflowAfterResize.bind(this, contents); - this.element.addEventListener("overflow", this._checkOverflowAfterResize, false); - this.element.addEventListener("underflow", 
this._checkOverflowAfterResize, false); - } - // TODO: fall back to mutation observer? - - this._onScroll = function() { - if(this.listening) { - this.element.scrollLeft = 0; - } - }.bind(this); - - // Keep scroll left from changing - this.element.addEventListener("scroll", this._onScroll); - - this.listening = true; - - return true; - } - - removeListeners() { - this.listening = false; - - if (typeof ResizeObserver !== "undefined" && this.ro) { - this.ro.disconnect(); - } else if (this.element) { - this.element.removeEventListener("overflow", this._checkOverflowAfterResize, false); - this.element.removeEventListener("underflow", this._checkOverflowAfterResize, false); - } - - this.element &&this.element.removeEventListener("scroll", this._onScroll); - - } - - addResizeObserver(contents) { - let wrapper = this.wrapper; - let prevHeight = wrapper.getBoundingClientRect().height; - this.ro = new ResizeObserver( entries => { - - if (!this.listening) { - return; - } - - for (let entry of entries) { - const cr = entry.contentRect; - - if (cr.height > prevHeight) { - this.checkOverflowAfterResize(contents); - prevHeight = wrapper.getBoundingClientRect().height; - } else if (cr.height < prevHeight ) { // TODO: calc line height && (prevHeight - cr.height) >= 22 - this.checkUnderflowAfterResize(contents); - prevHeight = cr.height; - } - } - }); - - this.ro.observe(wrapper); - } - - checkOverflowAfterResize(contents) { - if (!this.listening || !this.layoutMethod) { - return; - } - - let newBreakToken = this.layoutMethod.findBreakToken(this.wrapper, contents); - - if (newBreakToken) { - this.endToken = newBreakToken; - this._onOverflow && this._onOverflow(newBreakToken); - } - } - - checkUnderflowAfterResize(contents) { - if (!this.listening || !this.layoutMethod) { - return; - } - - let endToken = this.layoutMethod.findEndToken(this.wrapper, contents); - - // let newBreakToken = this.layoutMethod.findBreakToken(this.wrapper, contents); - - if (endToken) { - this._onUnderflow 
&& this._onUnderflow(endToken); - } - } - - - destroy() { - this.removeListeners(); - - this.element.remove(); - - this.element = undefined; - this.wrapper = undefined; - } - } - - eventEmitter(Page.prototype); - - /** - * Render a flow of text offscreen - * @class - */ - class ContentParser { - - constructor(content, cb) { - if (content && content.nodeType) { - // handle dom - this.dom = this.add(content); - } else if (typeof content === "string") { - this.dom = this.parse(content); - } - - return this.dom; - } - - parse(markup, mime) { - let range = document.createRange(); - let fragment = range.createContextualFragment(markup); - - this.addRefs(fragment); - this.removeEmpty(fragment); - - return fragment; - } - - add(contents) { - // let fragment = document.createDocumentFragment(); - // - // let children = [...contents.childNodes]; - // for (let child of children) { - // let clone = child.cloneNode(true); - // fragment.appendChild(clone); - // } - - this.addRefs(contents); - this.removeEmpty(contents); - - return contents; - } - - addRefs(content) { - var treeWalker = document.createTreeWalker( - content, - NodeFilter.SHOW_ELEMENT, - { acceptNode: function(node) { return NodeFilter.FILTER_ACCEPT; } }, - false - ); - - let node = treeWalker.nextNode(); - while(node) { - - if (!node.hasAttribute("data-ref")) { - let uuid = UUID(); - node.setAttribute("data-ref", uuid); - } - - if (node.id) { - node.setAttribute("data-id", node.id); - } - - // node.setAttribute("data-children", node.childNodes.length); - - // node.setAttribute("data-text", node.textContent.trim().length); - node = treeWalker.nextNode(); - } - } - - removeEmpty(content) { - var treeWalker = document.createTreeWalker( - content, - NodeFilter.SHOW_TEXT, - { acceptNode: function(node) { - // Only remove more than a single space - if (node.textContent.length > 1 && !node.textContent.trim()) { - - // Don't touch whitespace if text is preformated - let parent = node.parentNode; - let pre = 
isElement(parent) && parent.closest("pre"); - if (pre) { - return NodeFilter.FILTER_REJECT; - } - - return NodeFilter.FILTER_ACCEPT; - } else { - return NodeFilter.FILTER_REJECT; - } - } }, - false - ); - - let node; - let current; - node = treeWalker.nextNode(); - while(node) { - current = node; - node = treeWalker.nextNode(); - // if (!current.nextSibling || (current.nextSibling && current.nextSibling.nodeType === 1)) { - current.parentNode.removeChild(current); - // } - } - } - - find(ref) { - return this.refs[ref]; - } - - // isWrapper(element) { - // return wrappersRegex.test(element.nodeName); - // } - - isText(node) { - return node.tagName === "TAG"; - } - - isElement(node) { - return node.nodeType === 1; - } - - hasChildren(node) { - return node.childNodes && node.childNodes.length; - } - - - destroy() { - this.refs = undefined; - this.dom = undefined; - } - } - - /** - * Queue for handling tasks one at a time - * @class - * @param {scope} context what this will resolve to in the tasks - */ - class Queue { - constructor(context){ - this._q = []; - this.context = context; - this.tick = requestAnimationFrame; - this.running = false; - this.paused = false; - } - - /** - * Add an item to the queue - * @return {Promise} enqueued - */ - enqueue() { - var deferred, promise; - var queued; - var task = [].shift.call(arguments); - var args = arguments; - - // Handle single args without context - // if(args && !Array.isArray(args)) { - // args = [args]; - // } - if(!task) { - throw new Error("No Task Provided"); - } - - if(typeof task === "function"){ - - deferred = new defer(); - promise = deferred.promise; - - queued = { - "task" : task, - "args" : args, - //"context" : context, - "deferred" : deferred, - "promise" : promise - }; - - } else { - // Task is a promise - queued = { - "promise" : task - }; - - } - - this._q.push(queued); - - // Wait to start queue flush - if (this.paused == false && !this.running) { - this.run(); - } - - return queued.promise; - } - - 
/** - * Run one item - * @return {Promise} dequeued - */ - dequeue(){ - var inwait, task, result; - - if(this._q.length && !this.paused) { - inwait = this._q.shift(); - task = inwait.task; - if(task){ - // console.log(task) - - result = task.apply(this.context, inwait.args); - - if(result && typeof result["then"] === "function") { - // Task is a function that returns a promise - return result.then(function(){ - inwait.deferred.resolve.apply(this.context, arguments); - }.bind(this), function() { - inwait.deferred.reject.apply(this.context, arguments); - }.bind(this)); - } else { - // Task resolves immediately - inwait.deferred.resolve.apply(this.context, result); - return inwait.promise; - } - - - - } else if(inwait.promise) { - // Task is a promise - return inwait.promise; - } - - } else { - inwait = new defer(); - inwait.deferred.resolve(); - return inwait.promise; - } - - } - - // Run All Immediately - dump(){ - while(this._q.length) { - this.dequeue(); - } - } - - /** - * Run all tasks sequentially, at convince - * @return {Promise} all run - */ - run(){ - - if(!this.running){ - this.running = true; - this.defered = new defer(); - } - - this.tick.call(window, () => { - - if(this._q.length) { - - this.dequeue() - .then(function(){ - this.run(); - }.bind(this)); - - } else { - this.defered.resolve(); - this.running = undefined; - } - - }); - - // Unpause - if(this.paused == true) { - this.paused = false; - } - - return this.defered.promise; - } - - /** - * Flush all, as quickly as possible - * @return {Promise} ran - */ - flush(){ - - if(this.running){ - return this.running; - } - - if(this._q.length) { - this.running = this.dequeue() - .then(function(){ - this.running = undefined; - return this.flush(); - }.bind(this)); - - return this.running; - } - - } - - /** - * Clear all items in wait - * @return {void} - */ - clear(){ - this._q = []; - } - - /** - * Get the number of tasks in the queue - * @return {number} tasks - */ - length(){ - return this._q.length; - } 
- - /** - * Pause a running queue - * @return {void} - */ - pause(){ - this.paused = true; - } - - /** - * End the queue - * @return {void} - */ - stop(){ - this._q = []; - this.running = false; - this.paused = true; - } - } - - const TEMPLATE = ` -
-
`; - - /** - * Chop up text into flows - * @class - */ - class Chunker { - constructor(content, renderTo, options) { - // this.preview = preview; - - this.settings = options || {}; - - this.hooks = {}; - this.hooks.beforeParsed = new Hook(this); - this.hooks.afterParsed = new Hook(this); - this.hooks.beforePageLayout = new Hook(this); - this.hooks.layout = new Hook(this); - this.hooks.renderNode = new Hook(this); - this.hooks.layoutNode = new Hook(this); - this.hooks.onOverflow = new Hook(this); - this.hooks.onBreakToken = new Hook(); - this.hooks.afterPageLayout = new Hook(this); - this.hooks.afterRendered = new Hook(this); - - this.pages = []; - this.total = 0; - - this.q = new Queue(this); - this.stopped = false; - this.rendered = false; - - this.content = content; - - this.charsPerBreak = []; - this.maxChars; - - if (content) { - this.flow(content, renderTo); - } - } - - setup(renderTo) { - this.pagesArea = document.createElement("div"); - this.pagesArea.classList.add("pagedjs_pages"); - - if (renderTo) { - renderTo.appendChild(this.pagesArea); - } else { - document.querySelector("body").appendChild(this.pagesArea); - } - - this.pageTemplate = document.createElement("template"); - this.pageTemplate.innerHTML = TEMPLATE; - - } - - async flow(content, renderTo) { - let parsed; - - await this.hooks.beforeParsed.trigger(content, this); - - parsed = new ContentParser(content); - - this.source = parsed; - this.breakToken = undefined; - - if (this.pagesArea && this.pageTemplate) { - this.q.clear(); - this.removePages(); - } else { - this.setup(renderTo); - } - - this.emit("rendering", content); - - await this.hooks.afterParsed.trigger(parsed, this); - - await this.loadFonts(); - - let rendered = await this.render(parsed, this.breakToken); - while (rendered.canceled) { - this.start(); - rendered = await this.render(parsed, this.breakToken); - } - - this.rendered = true; - this.pagesArea.style.setProperty("--pagedjs-page-count", this.total); - - await 
this.hooks.afterRendered.trigger(this.pages, this); - - this.emit("rendered", this.pages); - - - - return this; - } - - // oversetPages() { - // let overset = []; - // for (let i = 0; i < this.pages.length; i++) { - // let page = this.pages[i]; - // if (page.overset) { - // overset.push(page); - // // page.overset = false; - // } - // } - // return overset; - // } - // - // async handleOverset(parsed) { - // let overset = this.oversetPages(); - // if (overset.length) { - // console.log("overset", overset); - // let index = this.pages.indexOf(overset[0]) + 1; - // console.log("INDEX", index); - // - // // Remove pages - // // this.removePages(index); - // - // // await this.render(parsed, overset[0].overset); - // - // // return this.handleOverset(parsed); - // } - // } - - async render(parsed, startAt) { - let renderer = this.layout(parsed, startAt, this.settings); - - let done = false; - let result; - - while (!done) { - result = await this.q.enqueue(() => { return this.renderAsync(renderer); }); - done = result.done; - } - - return result; - } - - start() { - this.rendered = false; - this.stopped = false; - } - - stop() { - this.stopped = true; - // this.q.clear(); - } - - renderOnIdle(renderer) { - return new Promise(resolve => { - requestIdleCallback(async () => { - if (this.stopped) { - return resolve({ done: true, canceled: true }); - } - let result = await renderer.next(); - if (this.stopped) { - resolve({ done: true, canceled: true }); - } else { - resolve(result); - } - }); - }); - } - - async renderAsync(renderer) { - if (this.stopped) { - return { done: true, canceled: true }; - } - let result = await renderer.next(); - if (this.stopped) { - return { done: true, canceled: true }; - } else { - return result; - } - } - - async handleBreaks(node) { - let currentPage = this.total + 1; - let currentPosition = currentPage % 2 === 0 ? "left" : "right"; - // TODO: Recto and Verso should reverse for rtl languages - let currentSide = currentPage % 2 === 0 ? 
"verso" : "recto"; - let previousBreakAfter; - let breakBefore; - let page; - - if (currentPage === 1) { - return; - } - - if (node && - typeof node.dataset !== "undefined" && - typeof node.dataset.previousBreakAfter !== "undefined") { - previousBreakAfter = node.dataset.previousBreakAfter; - } - - if (node && - typeof node.dataset !== "undefined" && - typeof node.dataset.breakBefore !== "undefined") { - breakBefore = node.dataset.breakBefore; - } - - if( previousBreakAfter && - (previousBreakAfter === "left" || previousBreakAfter === "right") && - previousBreakAfter !== currentPosition) { - page = this.addPage(true); - } else if( previousBreakAfter && - (previousBreakAfter === "verso" || previousBreakAfter === "recto") && - previousBreakAfter !== currentSide) { - page = this.addPage(true); - } else if( breakBefore && - (breakBefore === "left" || breakBefore === "right") && - breakBefore !== currentPosition) { - page = this.addPage(true); - } else if( breakBefore && - (breakBefore === "verso" || breakBefore === "recto") && - breakBefore !== currentSide) { - page = this.addPage(true); - } - - if (page) { - await this.hooks.beforePageLayout.trigger(page, undefined, undefined, this); - this.emit("page", page); - // await this.hooks.layout.trigger(page.element, page, undefined, this); - await this.hooks.afterPageLayout.trigger(page.element, page, undefined, this); - this.emit("renderedPage", page); - } - } - - async *layout(content, startAt) { - let breakToken = startAt || false; - - while (breakToken !== undefined && ( true)) { - - if (breakToken && breakToken.node) { - await this.handleBreaks(breakToken.node); - } else { - await this.handleBreaks(content.firstChild); - } - - let page = this.addPage(); - - await this.hooks.beforePageLayout.trigger(page, content, breakToken, this); - this.emit("page", page); - - // Layout content in the page, starting from the breakToken - breakToken = await page.layout(content, breakToken, this.maxChars); - - await 
this.hooks.afterPageLayout.trigger(page.element, page, breakToken, this); - this.emit("renderedPage", page); - - this.recoredCharLength(page.wrapper.textContent.length); - - yield breakToken; - - // Stop if we get undefined, showing we have reached the end of the content - } - - - } - - recoredCharLength(length) { - if (length === 0) { - return; - } - - this.charsPerBreak.push(length); - - // Keep the length of the last few breaks - if (this.charsPerBreak.length > 4) { - this.charsPerBreak.shift(); - } - - this.maxChars = this.charsPerBreak.reduce((a, b) => a + b, 0) / (this.charsPerBreak.length); - } - - removePages(fromIndex=0) { - - if (fromIndex >= this.pages.length) { - return; - } - - // Remove pages - for (let i = fromIndex; i < this.pages.length; i++) { - this.pages[i].destroy(); - } - - if (fromIndex > 0) { - this.pages.splice(fromIndex); - } else { - this.pages = []; - } - - this.total = this.pages.length; - } - - addPage(blank) { - let lastPage = this.pages[this.pages.length - 1]; - // Create a new page from the template - let page = new Page(this.pagesArea, this.pageTemplate, blank, this.hooks); - - this.pages.push(page); - - // Create the pages - page.create(undefined, lastPage && lastPage.element); - - page.index(this.total); - - if (!blank) { - // Listen for page overflow - page.onOverflow((overflowToken) => { - console.warn("overflow on", page.id, overflowToken); - - // Only reflow while rendering - if (this.rendered) { - return; - } - - let index = this.pages.indexOf(page) + 1; - - // Stop the rendering - this.stop(); - - // Set the breakToken to resume at - this.breakToken = overflowToken; - - // Remove pages - this.removePages(index); - - if (this.rendered === true) { - this.rendered = false; - - this.q.enqueue(async () => { - - this.start(); - - await this.render(this.source, this.breakToken); - - this.rendered = true; - - }); - } - - - }); - - page.onUnderflow((overflowToken) => { - // console.log("underflow on", page.id, overflowToken); - - // 
page.append(this.source, overflowToken); - - }); - } - - this.total = this.pages.length; - - return page; - } - /* - insertPage(index, blank) { - let lastPage = this.pages[index]; - // Create a new page from the template - let page = new Page(this.pagesArea, this.pageTemplate, blank, this.hooks); - - let total = this.pages.splice(index, 0, page); - - // Create the pages - page.create(undefined, lastPage && lastPage.element); - - page.index(index + 1); - - for (let i = index + 2; i < this.pages.length; i++) { - this.pages[i].index(i); - } - - if (!blank) { - // Listen for page overflow - page.onOverflow((overflowToken) => { - if (total < this.pages.length) { - this.pages[total].layout(this.source, overflowToken); - } else { - let newPage = this.addPage(); - newPage.layout(this.source, overflowToken); - } - }); - - page.onUnderflow(() => { - // console.log("underflow on", page.id); - }); - } - - this.total += 1; - - return page; - } - */ - - - - loadFonts() { - let fontPromises = []; - (document.fonts || []).forEach((fontFace) => { - if (fontFace.status !== "loaded") { - let fontLoaded = fontFace.load().then((r) => { - return fontFace.family; - }, (r) => { - console.warn("Failed to preload font-family:", fontFace.family); - return fontFace.family; - }); - fontPromises.push(fontLoaded); - } - }); - return Promise.all(fontPromises).catch((err) => { - console.warn(err); - }); - } - - destroy() { - this.pagesArea.remove(); - this.pageTemplate.remove(); - } - - } - - eventEmitter(Chunker.prototype); - - // - // list - // ┌──────┐ - // ┌──────────────┼─head │ - // │ │ tail─┼──────────────┐ - // │ └──────┘ │ - // ▼ ▼ - // item item item item - // ┌──────┐ ┌──────┐ ┌──────┐ ┌──────┐ - // null ◀──┼─prev │◀───┼─prev │◀───┼─prev │◀───┼─prev │ - // │ next─┼───▶│ next─┼───▶│ next─┼───▶│ next─┼──▶ null - // ├──────┤ ├──────┤ ├──────┤ ├──────┤ - // │ data │ │ data │ │ data │ │ data │ - // └──────┘ └──────┘ └──────┘ └──────┘ - // - - function createItem(data) { - return { - prev: 
null, - next: null, - data: data - }; - } - - function allocateCursor(node, prev, next) { - var cursor; - - if (cursors !== null) { - cursor = cursors; - cursors = cursors.cursor; - cursor.prev = prev; - cursor.next = next; - cursor.cursor = node.cursor; - } else { - cursor = { - prev: prev, - next: next, - cursor: node.cursor - }; - } - - node.cursor = cursor; - - return cursor; - } - - function releaseCursor(node) { - var cursor = node.cursor; - - node.cursor = cursor.cursor; - cursor.prev = null; - cursor.next = null; - cursor.cursor = cursors; - cursors = cursor; - } - - var cursors = null; - var List = function() { - this.cursor = null; - this.head = null; - this.tail = null; - }; - - List.createItem = createItem; - List.prototype.createItem = createItem; - - List.prototype.updateCursors = function(prevOld, prevNew, nextOld, nextNew) { - var cursor = this.cursor; - - while (cursor !== null) { - if (cursor.prev === prevOld) { - cursor.prev = prevNew; - } - - if (cursor.next === nextOld) { - cursor.next = nextNew; - } - - cursor = cursor.cursor; - } - }; - - List.prototype.getSize = function() { - var size = 0; - var cursor = this.head; - - while (cursor) { - size++; - cursor = cursor.next; - } - - return size; - }; - - List.prototype.fromArray = function(array) { - var cursor = null; - - this.head = null; - - for (var i = 0; i < array.length; i++) { - var item = createItem(array[i]); - - if (cursor !== null) { - cursor.next = item; - } else { - this.head = item; - } - - item.prev = cursor; - cursor = item; - } - - this.tail = cursor; - - return this; - }; - - List.prototype.toArray = function() { - var cursor = this.head; - var result = []; - - while (cursor) { - result.push(cursor.data); - cursor = cursor.next; - } - - return result; - }; - - List.prototype.toJSON = List.prototype.toArray; - - List.prototype.isEmpty = function() { - return this.head === null; - }; - - List.prototype.first = function() { - return this.head && this.head.data; - }; - - 
List.prototype.last = function() { - return this.tail && this.tail.data; - }; - - List.prototype.each = function(fn, context) { - var item; - - if (context === undefined) { - context = this; - } - - // push cursor - var cursor = allocateCursor(this, null, this.head); - - while (cursor.next !== null) { - item = cursor.next; - cursor.next = item.next; - - fn.call(context, item.data, item, this); - } - - // pop cursor - releaseCursor(this); - }; - - List.prototype.forEach = List.prototype.each; - - List.prototype.eachRight = function(fn, context) { - var item; - - if (context === undefined) { - context = this; - } - - // push cursor - var cursor = allocateCursor(this, this.tail, null); - - while (cursor.prev !== null) { - item = cursor.prev; - cursor.prev = item.prev; - - fn.call(context, item.data, item, this); - } - - // pop cursor - releaseCursor(this); - }; - - List.prototype.forEachRight = List.prototype.eachRight; - - List.prototype.nextUntil = function(start, fn, context) { - if (start === null) { - return; - } - - var item; - - if (context === undefined) { - context = this; - } - - // push cursor - var cursor = allocateCursor(this, null, start); - - while (cursor.next !== null) { - item = cursor.next; - cursor.next = item.next; - - if (fn.call(context, item.data, item, this)) { - break; - } - } - - // pop cursor - releaseCursor(this); - }; - - List.prototype.prevUntil = function(start, fn, context) { - if (start === null) { - return; - } - - var item; - - if (context === undefined) { - context = this; - } - - // push cursor - var cursor = allocateCursor(this, start, null); - - while (cursor.prev !== null) { - item = cursor.prev; - cursor.prev = item.prev; - - if (fn.call(context, item.data, item, this)) { - break; - } - } - - // pop cursor - releaseCursor(this); - }; - - List.prototype.some = function(fn, context) { - var cursor = this.head; - - if (context === undefined) { - context = this; - } - - while (cursor !== null) { - if (fn.call(context, cursor.data, 
cursor, this)) { - return true; - } - - cursor = cursor.next; - } - - return false; - }; - - List.prototype.map = function(fn, context) { - var result = new List(); - var cursor = this.head; - - if (context === undefined) { - context = this; - } - - while (cursor !== null) { - result.appendData(fn.call(context, cursor.data, cursor, this)); - cursor = cursor.next; - } - - return result; - }; - - List.prototype.filter = function(fn, context) { - var result = new List(); - var cursor = this.head; - - if (context === undefined) { - context = this; - } - - while (cursor !== null) { - if (fn.call(context, cursor.data, cursor, this)) { - result.appendData(cursor.data); - } - cursor = cursor.next; - } - - return result; - }; - - List.prototype.clear = function() { - this.head = null; - this.tail = null; - }; - - List.prototype.copy = function() { - var result = new List(); - var cursor = this.head; - - while (cursor !== null) { - result.insert(createItem(cursor.data)); - cursor = cursor.next; - } - - return result; - }; - - List.prototype.prepend = function(item) { - // head - // ^ - // item - this.updateCursors(null, item, this.head, item); - - // insert to the beginning of the list - if (this.head !== null) { - // new item <- first item - this.head.prev = item; - - // new item -> first item - item.next = this.head; - } else { - // if list has no head, then it also has no tail - // in this case tail points to the new item - this.tail = item; - } - - // head always points to new item - this.head = item; - - return this; - }; - - List.prototype.prependData = function(data) { - return this.prepend(createItem(data)); - }; - - List.prototype.append = function(item) { - return this.insert(item); - }; - - List.prototype.appendData = function(data) { - return this.insert(createItem(data)); - }; - - List.prototype.insert = function(item, before) { - if (before !== undefined && before !== null) { - // prev before - // ^ - // item - this.updateCursors(before.prev, item, before, 
item); - - if (before.prev === null) { - // insert to the beginning of list - if (this.head !== before) { - throw new Error('before doesn\'t belong to list'); - } - - // since head points to before, the list is not empty - // no need to check tail - this.head = item; - before.prev = item; - item.next = before; - - this.updateCursors(null, item); - } else { - - // insert between two items - before.prev.next = item; - item.prev = before.prev; - - before.prev = item; - item.next = before; - } - } else { - // tail - // ^ - // item - this.updateCursors(this.tail, item, null, item); - - // insert at the end of the list - if (this.tail !== null) { - // last item -> new item - this.tail.next = item; - - // last item <- new item - item.prev = this.tail; - } else { - // if list has no tail, then it also has no head - // in this case head points to new item - this.head = item; - } - - // tail always points to new item - this.tail = item; - } - - return this; - }; - - List.prototype.insertData = function(data, before) { - return this.insert(createItem(data), before); - }; - - List.prototype.remove = function(item) { - // item - // ^ - // prev next - this.updateCursors(item, item.prev, item, item.next); - - if (item.prev !== null) { - item.prev.next = item.next; - } else { - if (this.head !== item) { - throw new Error('item doesn\'t belong to list'); - } - - this.head = item.next; - } - - if (item.next !== null) { - item.next.prev = item.prev; - } else { - if (this.tail !== item) { - throw new Error('item doesn\'t belong to list'); - } - - this.tail = item.prev; - } - - item.prev = null; - item.next = null; - - return item; - }; - - List.prototype.push = function(data) { - this.insert(createItem(data)); - }; - - List.prototype.pop = function() { - if (this.tail !== null) { - return this.remove(this.tail); - } - }; - - List.prototype.unshift = function(data) { - this.prepend(createItem(data)); - }; - - List.prototype.shift = function() { - if (this.head !== null) { -
return this.remove(this.head); - } - }; - - List.prototype.prependList = function(list) { - return this.insertList(list, this.head); - }; - - List.prototype.appendList = function(list) { - return this.insertList(list); - }; - - List.prototype.insertList = function(list, before) { - // ignore empty lists - if (list.head === null) { - return this; - } - - if (before !== undefined && before !== null) { - this.updateCursors(before.prev, list.tail, before, list.head); - - // insert in the middle of the destination list - if (before.prev !== null) { - // before.prev <-> list.head - before.prev.next = list.head; - list.head.prev = before.prev; - } else { - this.head = list.head; - } - - before.prev = list.tail; - list.tail.next = before; - } else { - this.updateCursors(this.tail, list.tail, null, list.head); - - // insert to end of the list - if (this.tail !== null) { - // if destination list has a tail, then it also has a head, - // but head doesn't change - - // dest tail -> source head - this.tail.next = list.head; - - // dest tail <- source head - list.head.prev = this.tail; - } else { - // if the list has no tail, then it also has no head - // in this case head points to the new item - this.head = list.head; - } - - // tail always points to the new item - this.tail = list.tail; - } - - list.head = null; - list.tail = null; - - return this; - }; - - List.prototype.replace = function(oldItem, newItemOrList) { - if ('head' in newItemOrList) { - this.insertList(newItemOrList, oldItem); - } else { - this.insert(newItemOrList, oldItem); - } - - this.remove(oldItem); - }; - - var List_1 = List; - - var createCustomError = function createCustomError(name, message) { - // use Object.create(), because some VMs prevent setting line/column otherwise - // (iOS Safari 10 even throws an exception) - var error = Object.create(SyntaxError.prototype); - var errorStack = new Error(); - - error.name = name; - error.message = message; - - Object.defineProperty(error, 'stack', { - get: function() { -
return (errorStack.stack || '').replace(/^(.+\n){1,3}/, name + ': ' + message + '\n'); - } - }); - - return error; - }; - - var MAX_LINE_LENGTH = 100; - var OFFSET_CORRECTION = 60; - var TAB_REPLACEMENT = ' '; - - function sourceFragment(error, extraLines) { - function processLines(start, end) { - return lines.slice(start, end).map(function(line, idx) { - var num = String(start + idx + 1); - - while (num.length < maxNumLength) { - num = ' ' + num; - } - - return num + ' |' + line; - }).join('\n'); - } - - var lines = error.source.split(/\r\n?|\n|\f/); - var line = error.line; - var column = error.column; - var startLine = Math.max(1, line - extraLines) - 1; - var endLine = Math.min(line + extraLines, lines.length + 1); - var maxNumLength = Math.max(4, String(endLine).length) + 1; - var cutLeft = 0; - - // column correction according to replaced tab before column - column += (TAB_REPLACEMENT.length - 1) * (lines[line - 1].substr(0, column - 1).match(/\t/g) || []).length; - - if (column > MAX_LINE_LENGTH) { - cutLeft = column - OFFSET_CORRECTION + 3; - column = OFFSET_CORRECTION - 2; - } - - for (var i = startLine; i <= endLine; i++) { - if (i >= 0 && i < lines.length) { - lines[i] = lines[i].replace(/\t/g, TAB_REPLACEMENT); - lines[i] = - (cutLeft > 0 && lines[i].length > cutLeft ? '\u2026' : '') + - lines[i].substr(cutLeft, MAX_LINE_LENGTH - 2) + - (lines[i].length > cutLeft + MAX_LINE_LENGTH - 1 ? '\u2026' : ''); - } - } - - return [ - processLines(startLine, line), - new Array(column + maxNumLength + 2).join('-') + '^', - processLines(line, endLine) - ].filter(Boolean).join('\n'); - } - - var SyntaxError$1 = function(message, source, offset, line, column) { - var error = createCustomError('SyntaxError', message); - - error.source = source; - error.offset = offset; - error.line = line; - error.column = column; - - error.sourceFragment = function(extraLines) { - return sourceFragment(error, isNaN(extraLines) ? 
0 : extraLines); - }; - Object.defineProperty(error, 'formattedMessage', { - get: function() { - return ( - 'Parse error: ' + error.message + '\n' + - sourceFragment(error, 2) - ); - } - }); - - // for backward compatibility - error.parseError = { - offset: offset, - line: line, - column: column - }; - - return error; - }; - - var _SyntaxError = SyntaxError$1; - - // CSS Syntax Module Level 3 - // https://www.w3.org/TR/css-syntax-3/ - var TYPE = { - EOF: 0, // <EOF-token> - Ident: 1, // <ident-token> - Function: 2, // <function-token> - AtKeyword: 3, // <at-keyword-token> - Hash: 4, // <hash-token> - String: 5, // <string-token> - BadString: 6, // <bad-string-token> - Url: 7, // <url-token> - BadUrl: 8, // <bad-url-token> - Delim: 9, // <delim-token> - Number: 10, // <number-token> - Percentage: 11, // <percentage-token> - Dimension: 12, // <dimension-token> - WhiteSpace: 13, // <whitespace-token> - CDO: 14, // <CDO-token> - CDC: 15, // <CDC-token> - Colon: 16, // : - Semicolon: 17, // ; - Comma: 18, // , - LeftSquareBracket: 19, // <[-token> - RightSquareBracket: 20, // <]-token> - LeftParenthesis: 21, // <(-token> - RightParenthesis: 22, // <)-token> - LeftCurlyBracket: 23, // <{-token> - RightCurlyBracket: 24, // <}-token> - Comment: 25 - }; - - var NAME = Object.keys(TYPE).reduce(function(result, key) { - result[TYPE[key]] = key; - return result; - }, {}); - - var _const = { - TYPE: TYPE, - NAME: NAME - }; - - var EOF = 0; - - // https://drafts.csswg.org/css-syntax-3/ - // § 4.2. Definitions - - // digit - // A code point between U+0030 DIGIT ZERO (0) and U+0039 DIGIT NINE (9). - function isDigit(code) { - return code >= 0x0030 && code <= 0x0039; - } - - // hex digit - // A digit, or a code point between U+0041 LATIN CAPITAL LETTER A (A) and U+0046 LATIN CAPITAL LETTER F (F), - // or a code point between U+0061 LATIN SMALL LETTER A (a) and U+0066 LATIN SMALL LETTER F (f). - function isHexDigit(code) { - return ( - isDigit(code) || // 0 .. 9 - (code >= 0x0041 && code <= 0x0046) || // A .. F - (code >= 0x0061 && code <= 0x0066) // a .. f - ); - } - - // uppercase letter - // A code point between U+0041 LATIN CAPITAL LETTER A (A) and U+005A LATIN CAPITAL LETTER Z (Z).
- function isUppercaseLetter(code) { - return code >= 0x0041 && code <= 0x005A; - } - - // lowercase letter - // A code point between U+0061 LATIN SMALL LETTER A (a) and U+007A LATIN SMALL LETTER Z (z). - function isLowercaseLetter(code) { - return code >= 0x0061 && code <= 0x007A; - } - - // letter - // An uppercase letter or a lowercase letter. - function isLetter(code) { - return isUppercaseLetter(code) || isLowercaseLetter(code); - } - - // non-ASCII code point - // A code point with a value equal to or greater than U+0080 <control>. - function isNonAscii(code) { - return code >= 0x0080; - } - - // name-start code point - // A letter, a non-ASCII code point, or U+005F LOW LINE (_). - function isNameStart(code) { - return isLetter(code) || isNonAscii(code) || code === 0x005F; - } - - // name code point - // A name-start code point, a digit, or U+002D HYPHEN-MINUS (-). - function isName(code) { - return isNameStart(code) || isDigit(code) || code === 0x002D; - } - - // non-printable code point - // A code point between U+0000 NULL and U+0008 BACKSPACE, or U+000B LINE TABULATION, - // or a code point between U+000E SHIFT OUT and U+001F INFORMATION SEPARATOR ONE, or U+007F DELETE. - function isNonPrintable(code) { - return ( - (code >= 0x0000 && code <= 0x0008) || - (code === 0x000B) || - (code >= 0x000E && code <= 0x001F) || - (code === 0x007F) - ); - } - - // newline - // U+000A LINE FEED. Note that U+000D CARRIAGE RETURN and U+000C FORM FEED are not included in this definition, - // as they are converted to U+000A LINE FEED during preprocessing. - // TODO: we don't do preprocessing, so also check the code point for U+000D CARRIAGE RETURN and U+000C FORM FEED - function isNewline(code) { - return code === 0x000A || code === 0x000D || code === 0x000C; - } - - // whitespace - // A newline, U+0009 CHARACTER TABULATION, or U+0020 SPACE. - function isWhiteSpace(code) { - return isNewline(code) || code === 0x0020 || code === 0x0009; - } - - // § 4.3.8.
Check if two code points are a valid escape - function isValidEscape(first, second) { - // If the first code point is not U+005C REVERSE SOLIDUS (\), return false. - if (first !== 0x005C) { - return false; - } - - // Otherwise, if the second code point is a newline or EOF, return false. - if (isNewline(second) || second === EOF) { - return false; - } - - // Otherwise, return true. - return true; - } - - // § 4.3.9. Check if three code points would start an identifier - function isIdentifierStart(first, second, third) { - // Look at the first code point: - - // U+002D HYPHEN-MINUS - if (first === 0x002D) { - // If the second code point is a name-start code point or a U+002D HYPHEN-MINUS, - // or the second and third code points are a valid escape, return true. Otherwise, return false. - return ( - isNameStart(second) || - second === 0x002D || - isValidEscape(second, third) - ); - } - - // name-start code point - if (isNameStart(first)) { - // Return true. - return true; - } - - // U+005C REVERSE SOLIDUS (\) - if (first === 0x005C) { - // If the first and second code points are a valid escape, return true. Otherwise, return false. - return isValidEscape(first, second); - } - - // anything else - // Return false. - return false; - } - - // § 4.3.10. Check if three code points would start a number - function isNumberStart(first, second, third) { - // Look at the first code point: - - // U+002B PLUS SIGN (+) - // U+002D HYPHEN-MINUS (-) - if (first === 0x002B || first === 0x002D) { - // If the second code point is a digit, return true. - if (isDigit(second)) { - return 2; - } - - // Otherwise, if the second code point is a U+002E FULL STOP (.) - // and the third code point is a digit, return true. - // Otherwise, return false. - return second === 0x002E && isDigit(third) ? 3 : 0; - } - - // U+002E FULL STOP (.) - if (first === 0x002E) { - // If the second code point is a digit, return true. Otherwise, return false. - return isDigit(second) ? 
2 : 0; - } - - // digit - if (isDigit(first)) { - // Return true. - return 1; - } - - // anything else - // Return false. - return 0; - } - - // - // Misc - // - - // detect BOM (https://en.wikipedia.org/wiki/Byte_order_mark) - function isBOM(code) { - // UTF-16BE - if (code === 0xFEFF) { - return 1; - } - - // UTF-16LE - if (code === 0xFFFE) { - return 1; - } - - return 0; - } - - // Fast code category - // - // https://drafts.csswg.org/css-syntax/#tokenizer-definitions - // > non-ASCII code point - // > A code point with a value equal to or greater than U+0080 - // > name-start code point - // > A letter, a non-ASCII code point, or U+005F LOW LINE (_). - // > name code point - // > A name-start code point, a digit, or U+002D HYPHEN-MINUS (-) - // That means only ASCII code points have a special meaning, and we define a map for codes 0..127 only - var CATEGORY = new Array(0x80); - charCodeCategory.Eof = 0x80; - charCodeCategory.WhiteSpace = 0x82; - charCodeCategory.Digit = 0x83; - charCodeCategory.NameStart = 0x84; - charCodeCategory.NonPrintable = 0x85; - - for (var i = 0; i < CATEGORY.length; i++) { - switch (true) { - case isWhiteSpace(i): - CATEGORY[i] = charCodeCategory.WhiteSpace; - break; - - case isDigit(i): - CATEGORY[i] = charCodeCategory.Digit; - break; - - case isNameStart(i): - CATEGORY[i] = charCodeCategory.NameStart; - break; - - case isNonPrintable(i): - CATEGORY[i] = charCodeCategory.NonPrintable; - break; - - default: - CATEGORY[i] = i || charCodeCategory.Eof; - } - } - - function charCodeCategory(code) { - return code < 0x80 ?
CATEGORY[code] : charCodeCategory.NameStart; - } - var charCodeDefinitions = { - isDigit: isDigit, - isHexDigit: isHexDigit, - isUppercaseLetter: isUppercaseLetter, - isLowercaseLetter: isLowercaseLetter, - isLetter: isLetter, - isNonAscii: isNonAscii, - isNameStart: isNameStart, - isName: isName, - isNonPrintable: isNonPrintable, - isNewline: isNewline, - isWhiteSpace: isWhiteSpace, - isValidEscape: isValidEscape, - isIdentifierStart: isIdentifierStart, - isNumberStart: isNumberStart, - - isBOM: isBOM, - charCodeCategory: charCodeCategory - }; - - var isDigit$1 = charCodeDefinitions.isDigit; - var isHexDigit$1 = charCodeDefinitions.isHexDigit; - var isUppercaseLetter$1 = charCodeDefinitions.isUppercaseLetter; - var isName$1 = charCodeDefinitions.isName; - var isWhiteSpace$1 = charCodeDefinitions.isWhiteSpace; - var isValidEscape$1 = charCodeDefinitions.isValidEscape; - - function getCharCode(source, offset) { - return offset < source.length ? source.charCodeAt(offset) : 0; - } - - function getNewlineLength(source, offset, code) { - if (code === 13 /* \r */ && getCharCode(source, offset + 1) === 10 /* \n */) { - return 2; - } - - return 1; - } - - function cmpChar(testStr, offset, referenceCode) { - var code = testStr.charCodeAt(offset); - - // code.toLowerCase() for A..Z - if (isUppercaseLetter$1(code)) { - code = code | 32; - } - - return code === referenceCode; - } - - function cmpStr(testStr, start, end, referenceStr) { - if (end - start !== referenceStr.length) { - return false; - } - - if (start < 0 || end > testStr.length) { - return false; - } - - for (var i = start; i < end; i++) { - var testCode = testStr.charCodeAt(i); - var referenceCode = referenceStr.charCodeAt(i - start); - - // testCode.toLowerCase() for A..Z - if (isUppercaseLetter$1(testCode)) { - testCode = testCode | 32; - } - - if (testCode !== referenceCode) { - return false; - } - } - - return true; - } - - function findWhiteSpaceStart(source, offset) { - for (; offset >= 0; offset--) { - if 
(!isWhiteSpace$1(source.charCodeAt(offset))) { - break; - } - } - - return offset + 1; - } - - function findWhiteSpaceEnd(source, offset) { - for (; offset < source.length; offset++) { - if (!isWhiteSpace$1(source.charCodeAt(offset))) { - break; - } - } - - return offset; - } - - function findDecimalNumberEnd(source, offset) { - for (; offset < source.length; offset++) { - if (!isDigit$1(source.charCodeAt(offset))) { - break; - } - } - - return offset; - } - - // § 4.3.7. Consume an escaped code point - function consumeEscaped(source, offset) { - // It assumes that the U+005C REVERSE SOLIDUS (\) has already been consumed and - // that the next input code point has already been verified to be part of a valid escape. - offset += 2; - - // hex digit - if (isHexDigit$1(getCharCode(source, offset - 1))) { - // Consume as many hex digits as possible, but no more than 5. - // Note that this means 1-6 hex digits have been consumed in total. - for (var maxOffset = Math.min(source.length, offset + 5); offset < maxOffset; offset++) { - if (!isHexDigit$1(getCharCode(source, offset))) { - break; - } - } - - // If the next input code point is whitespace, consume it as well. - var code = getCharCode(source, offset); - if (isWhiteSpace$1(code)) { - offset += getNewlineLength(source, offset, code); - } - } - - return offset; - } - - // §4.3.11. Consume a name - // Note: This algorithm does not do the verification of the first few code points that are necessary - // to ensure the returned code points would constitute an <ident-token>. If that is the intended use, - // ensure that the stream starts with an identifier before calling this algorithm. - function consumeName(source, offset) { - // Let result initially be an empty string. - // Repeatedly consume the next input code point from the stream: - for (; offset < source.length; offset++) { - var code = source.charCodeAt(offset); - - // name code point - if (isName$1(code)) { - // Append the code point to result.
- continue; - } - - // the stream starts with a valid escape - if (isValidEscape$1(code, getCharCode(source, offset + 1))) { - // Consume an escaped code point. Append the returned code point to result. - offset = consumeEscaped(source, offset) - 1; - continue; - } - - // anything else - // Reconsume the current input code point. Return result. - break; - } - - return offset; - } - - // §4.3.12. Consume a number - function consumeNumber(source, offset) { - var code = source.charCodeAt(offset); - - // 2. If the next input code point is U+002B PLUS SIGN (+) or U+002D HYPHEN-MINUS (-), - // consume it and append it to repr. - if (code === 0x002B || code === 0x002D) { - code = source.charCodeAt(offset += 1); - } - - // 3. While the next input code point is a digit, consume it and append it to repr. - if (isDigit$1(code)) { - offset = findDecimalNumberEnd(source, offset + 1); - code = source.charCodeAt(offset); - } - - // 4. If the next 2 input code points are U+002E FULL STOP (.) followed by a digit, then: - if (code === 0x002E && isDigit$1(source.charCodeAt(offset + 1))) { - // 4.1 Consume them. - // 4.2 Append them to repr. - code = source.charCodeAt(offset += 2); - - // 4.3 Set type to "number". - // TODO - - // 4.4 While the next input code point is a digit, consume it and append it to repr. - - offset = findDecimalNumberEnd(source, offset); - } - - // 5. If the next 2 or 3 input code points are U+0045 LATIN CAPITAL LETTER E (E) - // or U+0065 LATIN SMALL LETTER E (e), ... , followed by a digit, then: - if (cmpChar(source, offset, 101 /* e */)) { - var sign = 0; - code = source.charCodeAt(offset + 1); - - // ... optionally followed by U+002D HYPHEN-MINUS (-) or U+002B PLUS SIGN (+) ... - if (code === 0x002D || code === 0x002B) { - sign = 1; - code = source.charCodeAt(offset + 2); - } - - // ... followed by a digit - if (isDigit$1(code)) { - // 5.1 Consume them. - // 5.2 Append them to repr. - - // 5.3 Set type to "number". 
- // TODO - - // 5.4 While the next input code point is a digit, consume it and append it to repr. - offset = findDecimalNumberEnd(source, offset + 1 + sign + 1); - } - } - - return offset; - } - - // § 4.3.14. Consume the remnants of a bad url - // ... its sole use is to consume enough of the input stream to reach a recovery point - // where normal tokenizing can resume. - function consumeBadUrlRemnants(source, offset) { - // Repeatedly consume the next input code point from the stream: - for (; offset < source.length; offset++) { - var code = source.charCodeAt(offset); - - // U+0029 RIGHT PARENTHESIS ()) - // EOF - if (code === 0x0029) { - // Return. - offset++; - break; - } - - if (isValidEscape$1(code, getCharCode(source, offset + 1))) { - // Consume an escaped code point. - // Note: This allows an escaped right parenthesis ("\)") to be encountered - // without ending the <bad-url-token>. This is otherwise identical to - // the "anything else" clause. - offset = consumeEscaped(source, offset); - } - } - - return offset; - } - - var utils = { - consumeEscaped: consumeEscaped, - consumeName: consumeName, - consumeNumber: consumeNumber, - consumeBadUrlRemnants: consumeBadUrlRemnants, - - cmpChar: cmpChar, - cmpStr: cmpStr, - - getNewlineLength: getNewlineLength, - findWhiteSpaceStart: findWhiteSpaceStart, - findWhiteSpaceEnd: findWhiteSpaceEnd - }; - - var TYPE$1 = _const.TYPE; - var NAME$1 = _const.NAME; - - - var cmpStr$1 = utils.cmpStr; - - var EOF$1 = TYPE$1.EOF; - var WHITESPACE = TYPE$1.WhiteSpace; - var COMMENT = TYPE$1.Comment; - - var OFFSET_MASK = 0x00FFFFFF; - var TYPE_SHIFT = 24; - - var TokenStream = function() { - this.offsetAndType = null; - this.balance = null; - - this.reset(); - }; - - TokenStream.prototype = { - reset: function() { - this.eof = false; - this.tokenIndex = -1; - this.tokenType = 0; - this.tokenStart = this.firstCharOffset; - this.tokenEnd = this.firstCharOffset; - }, - - lookupType: function(offset) { - offset += this.tokenIndex; - - if (offset
< this.tokenCount) { - return this.offsetAndType[offset] >> TYPE_SHIFT; - } - - return EOF$1; - }, - lookupOffset: function(offset) { - offset += this.tokenIndex; - - if (offset < this.tokenCount) { - return this.offsetAndType[offset - 1] & OFFSET_MASK; - } - - return this.source.length; - }, - lookupValue: function(offset, referenceStr) { - offset += this.tokenIndex; - - if (offset < this.tokenCount) { - return cmpStr$1( - this.source, - this.offsetAndType[offset - 1] & OFFSET_MASK, - this.offsetAndType[offset] & OFFSET_MASK, - referenceStr - ); - } - - return false; - }, - getTokenStart: function(tokenIndex) { - if (tokenIndex === this.tokenIndex) { - return this.tokenStart; - } - - if (tokenIndex > 0) { - return tokenIndex < this.tokenCount - ? this.offsetAndType[tokenIndex - 1] & OFFSET_MASK - : this.offsetAndType[this.tokenCount] & OFFSET_MASK; - } - - return this.firstCharOffset; - }, - - // TODO: -> skipUntilBalanced - getRawLength: function(startToken, mode) { - var cursor = startToken; - var balanceEnd; - var offset = this.offsetAndType[Math.max(cursor - 1, 0)] & OFFSET_MASK; - var type; - - loop: - for (; cursor < this.tokenCount; cursor++) { - balanceEnd = this.balance[cursor]; - - // stop scanning on balance edge that points to offset before start token - if (balanceEnd < startToken) { - break loop; - } - - type = this.offsetAndType[cursor] >> TYPE_SHIFT; - - // check token is stop type - switch (mode(type, this.source, offset)) { - case 1: - break loop; - - case 2: - cursor++; - break loop; - - default: - offset = this.offsetAndType[cursor] & OFFSET_MASK; - - // fast forward to the end of balanced block - if (this.balance[balanceEnd] === cursor) { - cursor = balanceEnd; - } - } - } - - return cursor - this.tokenIndex; - }, - isBalanceEdge: function(pos) { - return this.balance[this.tokenIndex] < pos; - }, - isDelim: function(code, offset) { - if (offset) { - return ( - this.lookupType(offset) === TYPE$1.Delim && - 
this.source.charCodeAt(this.lookupOffset(offset)) === code - ); - } - - return ( - this.tokenType === TYPE$1.Delim && - this.source.charCodeAt(this.tokenStart) === code - ); - }, - - getTokenValue: function() { - return this.source.substring(this.tokenStart, this.tokenEnd); - }, - getTokenLength: function() { - return this.tokenEnd - this.tokenStart; - }, - substrToCursor: function(start) { - return this.source.substring(start, this.tokenStart); - }, - - skipWS: function() { - for (var i = this.tokenIndex, skipTokenCount = 0; i < this.tokenCount; i++, skipTokenCount++) { - if ((this.offsetAndType[i] >> TYPE_SHIFT) !== WHITESPACE) { - break; - } - } - - if (skipTokenCount > 0) { - this.skip(skipTokenCount); - } - }, - skipSC: function() { - while (this.tokenType === WHITESPACE || this.tokenType === COMMENT) { - this.next(); - } - }, - skip: function(tokenCount) { - var next = this.tokenIndex + tokenCount; - - if (next < this.tokenCount) { - this.tokenIndex = next; - this.tokenStart = this.offsetAndType[next - 1] & OFFSET_MASK; - next = this.offsetAndType[next]; - this.tokenType = next >> TYPE_SHIFT; - this.tokenEnd = next & OFFSET_MASK; - } else { - this.tokenIndex = this.tokenCount; - this.next(); - } - }, - next: function() { - var next = this.tokenIndex + 1; - - if (next < this.tokenCount) { - this.tokenIndex = next; - this.tokenStart = this.tokenEnd; - next = this.offsetAndType[next]; - this.tokenType = next >> TYPE_SHIFT; - this.tokenEnd = next & OFFSET_MASK; - } else { - this.tokenIndex = this.tokenCount; - this.eof = true; - this.tokenType = EOF$1; - this.tokenStart = this.tokenEnd = this.source.length; - } - }, - - dump: function() { - var offset = this.firstCharOffset; - - return Array.prototype.slice.call(this.offsetAndType, 0, this.tokenCount).map(function(item, idx) { - var start = offset; - var end = item & OFFSET_MASK; - - offset = end; - - return { - idx: idx, - type: NAME$1[item >> TYPE_SHIFT], - chunk: this.source.substring(start, end), - balance: 
this.balance[idx] - }; - }, this); - } - }; - - var TokenStream_1 = TokenStream; - - function noop$1(value) { - return value; - } - - function generateMultiplier(multiplier) { - if (multiplier.min === 0 && multiplier.max === 0) { - return '*'; - } - - if (multiplier.min === 0 && multiplier.max === 1) { - return '?'; - } - - if (multiplier.min === 1 && multiplier.max === 0) { - return multiplier.comma ? '#' : '+'; - } - - if (multiplier.min === 1 && multiplier.max === 1) { - return ''; - } - - return ( - (multiplier.comma ? '#' : '') + - (multiplier.min === multiplier.max - ? '{' + multiplier.min + '}' - : '{' + multiplier.min + ',' + (multiplier.max !== 0 ? multiplier.max : '') + '}' - ) - ); - } - - function generateTypeOpts(node) { - switch (node.type) { - case 'Range': - return ( - ' [' + - (node.min === null ? '-∞' : node.min) + - ',' + - (node.max === null ? '∞' : node.max) + - ']' - ); - - default: - throw new Error('Unknown node type `' + node.type + '`'); - } - } - - function generateSequence(node, decorate, forceBraces, compact) { - var combinator = node.combinator === ' ' || compact ? node.combinator : ' ' + node.combinator + ' '; - var result = node.terms.map(function(term) { - return generate(term, decorate, forceBraces, compact); - }).join(combinator); - - if (node.explicit || forceBraces) { - result = (compact || result[0] === ',' ? '[' : '[ ') + result + (compact ? ']' : ' ]'); - } - - return result; - } - - function generate(node, decorate, forceBraces, compact) { - var result; - - switch (node.type) { - case 'Group': - result = - generateSequence(node, decorate, forceBraces, compact) + - (node.disallowEmpty ? '!' : ''); - break; - - case 'Multiplier': - // return since node is a composition - return ( - generate(node.term, decorate, forceBraces, compact) + - decorate(generateMultiplier(node), node) - ); - - case 'Type': - result = '<' + node.name + (node.opts ? 
decorate(generateTypeOpts(node.opts), node.opts) : '') + '>'; - break; - - case 'Property': - result = '<\'' + node.name + '\'>'; - break; - - case 'Keyword': - result = node.name; - break; - - case 'AtKeyword': - result = '@' + node.name; - break; - - case 'Function': - result = node.name + '('; - break; - - case 'String': - case 'Token': - result = node.value; - break; - - case 'Comma': - result = ','; - break; - - default: - throw new Error('Unknown node type `' + node.type + '`'); - } - - return decorate(result, node); - } - - var generate_1 = function(node, options) { - var decorate = noop$1; - var forceBraces = false; - var compact = false; - - if (typeof options === 'function') { - decorate = options; - } else if (options) { - forceBraces = Boolean(options.forceBraces); - compact = Boolean(options.compact); - if (typeof options.decorate === 'function') { - decorate = options.decorate; - } - } - - return generate(node, decorate, forceBraces, compact); - }; - - function fromMatchResult(matchResult) { - var tokens = matchResult.tokens; - var longestMatch = matchResult.longestMatch; - var node = longestMatch < tokens.length ? tokens[longestMatch].node : null; - var mismatchOffset = -1; - var entries = 0; - var css = ''; - - for (var i = 0; i < tokens.length; i++) { - if (i === longestMatch) { - mismatchOffset = css.length; - } - - if (node !== null && tokens[i].node === node) { - if (i <= longestMatch) { - entries++; - } else { - entries = 0; - } - } - - css += tokens[i].value; - } - - return { - node: node, - css: css, - mismatchOffset: mismatchOffset === -1 ? 
css.length : mismatchOffset, - last: node === null || entries > 1 - }; - } - - function getLocation(node, point) { - var loc = node && node.loc && node.loc[point]; - - if (loc) { - return { - offset: loc.offset, - line: loc.line, - column: loc.column - }; - } - - return null; - } - - var SyntaxReferenceError = function(type, referenceName) { - var error = createCustomError( - 'SyntaxReferenceError', - type + (referenceName ? ' `' + referenceName + '`' : '') - ); - - error.reference = referenceName; - - return error; - }; - - var MatchError = function(message, syntax, node, matchResult) { - var error = createCustomError('SyntaxMatchError', message); - var details = fromMatchResult(matchResult); - var mismatchOffset = details.mismatchOffset || 0; - var badNode = details.node || node; - var end = getLocation(badNode, 'end'); - var start = details.last ? end : getLocation(badNode, 'start'); - var css = details.css; - - error.rawMessage = message; - error.syntax = syntax ? generate_1(syntax) : ''; - error.css = css; - error.mismatchOffset = mismatchOffset; - error.loc = { - source: (badNode && badNode.loc && badNode.loc.source) || '', - start: start, - end: end - }; - error.line = start ? start.line : undefined; - error.column = start ? start.column : undefined; - error.offset = start ? 
start.offset : undefined; - error.message = message + '\n' + - ' syntax: ' + error.syntax + '\n' + - ' value: ' + (error.css || '') + '\n' + - ' --------' + new Array(error.mismatchOffset + 1).join('-') + '^'; - - return error; - }; - - var error = { - SyntaxReferenceError: SyntaxReferenceError, - MatchError: MatchError - }; - - var hasOwnProperty = Object.prototype.hasOwnProperty; - var keywords = Object.create(null); - var properties = Object.create(null); - var HYPHENMINUS = 45; // '-'.charCodeAt() - - function isCustomProperty(str, offset) { - offset = offset || 0; - - return str.length - offset >= 2 && - str.charCodeAt(offset) === HYPHENMINUS && - str.charCodeAt(offset + 1) === HYPHENMINUS; - } - - function getVendorPrefix(str, offset) { - offset = offset || 0; - - // a vendor prefix should be at least 3 chars long - if (str.length - offset >= 3) { - // a vendor prefix starts with a hyphen-minus followed by a non-hyphen-minus - if (str.charCodeAt(offset) === HYPHENMINUS && - str.charCodeAt(offset + 1) !== HYPHENMINUS) { - // a vendor prefix should end with a hyphen-minus - var secondDashIndex = str.indexOf('-', offset + 2); - - if (secondDashIndex !== -1) { - return str.substring(offset, secondDashIndex + 1); - } - } - } - - return ''; - } - - function getKeywordDescriptor(keyword) { - if (hasOwnProperty.call(keywords, keyword)) { - return keywords[keyword]; - } - - var name = keyword.toLowerCase(); - - if (hasOwnProperty.call(keywords, name)) { - return keywords[keyword] = keywords[name]; - } - - var custom = isCustomProperty(name, 0); - var vendor = !custom ?
getVendorPrefix(name, 0) : ''; - - return keywords[keyword] = Object.freeze({ - basename: name.substr(vendor.length), - name: name, - vendor: vendor, - prefix: vendor, - custom: custom - }); - } - - function getPropertyDescriptor(property) { - if (hasOwnProperty.call(properties, property)) { - return properties[property]; - } - - var name = property; - var hack = property[0]; - - if (hack === '/') { - hack = property[1] === '/' ? '//' : '/'; - } else if (hack !== '_' && - hack !== '*' && - hack !== '$' && - hack !== '#' && - hack !== '+' && - hack !== '&') { - hack = ''; - } - - var custom = isCustomProperty(name, hack.length); - - // re-use result when possible (the same as for lower case) - if (!custom) { - name = name.toLowerCase(); - if (hasOwnProperty.call(properties, name)) { - return properties[property] = properties[name]; - } - } - - var vendor = !custom ? getVendorPrefix(name, hack.length) : ''; - var prefix = name.substr(0, hack.length + vendor.length); - - return properties[property] = Object.freeze({ - basename: name.substr(prefix.length), - name: name.substr(hack.length), - hack: hack, - vendor: vendor, - prefix: prefix, - custom: custom - }); - } - - var names = { - keyword: getKeywordDescriptor, - property: getPropertyDescriptor, - isCustomProperty: isCustomProperty, - vendorPrefix: getVendorPrefix - }; - - var MIN_SIZE = 16 * 1024; - var SafeUint32Array = typeof Uint32Array !== 'undefined' ? 
Uint32Array : Array; // fallback on Array when TypedArray is not supported - - var adoptBuffer = function adoptBuffer(buffer, size) { - if (buffer === null || buffer.length < size) { - return new SafeUint32Array(Math.max(size + 1024, MIN_SIZE)); - } - - return buffer; - }; - - var TYPE$2 = _const.TYPE; - - - var isNewline$1 = charCodeDefinitions.isNewline; - var isName$2 = charCodeDefinitions.isName; - var isValidEscape$2 = charCodeDefinitions.isValidEscape; - var isNumberStart$1 = charCodeDefinitions.isNumberStart; - var isIdentifierStart$1 = charCodeDefinitions.isIdentifierStart; - var charCodeCategory$1 = charCodeDefinitions.charCodeCategory; - var isBOM$1 = charCodeDefinitions.isBOM; - - - var cmpStr$2 = utils.cmpStr; - var getNewlineLength$1 = utils.getNewlineLength; - var findWhiteSpaceEnd$1 = utils.findWhiteSpaceEnd; - var consumeEscaped$1 = utils.consumeEscaped; - var consumeName$1 = utils.consumeName; - var consumeNumber$1 = utils.consumeNumber; - var consumeBadUrlRemnants$1 = utils.consumeBadUrlRemnants; - - var OFFSET_MASK$1 = 0x00FFFFFF; - var TYPE_SHIFT$1 = 24; - - function tokenize(source, stream) { - function getCharCode(offset) { - return offset < sourceLength ? source.charCodeAt(offset) : 0; - } - - // § 4.3.3. Consume a numeric token - function consumeNumericToken() { - // Consume a number and let number be the result. - offset = consumeNumber$1(source, offset); - - // If the next 3 input code points would start an identifier, then: - if (isIdentifierStart$1(getCharCode(offset), getCharCode(offset + 1), getCharCode(offset + 2))) { - // Create a <dimension-token> with the same value and type flag as number, and a unit set initially to the empty string. - // Consume a name. Set the <dimension-token>’s unit to the returned value. - // Return the <dimension-token>. - type = TYPE$2.Dimension; - offset = consumeName$1(source, offset); - return; - } - - // Otherwise, if the next input code point is U+0025 PERCENTAGE SIGN (%), consume it.
- if (getCharCode(offset) === 0x0025) { - // Create a with the same value as number, and return it. - type = TYPE$2.Percentage; - offset++; - return; - } - - // Otherwise, create a with the same value and type flag as number, and return it. - type = TYPE$2.Number; - } - - // § 4.3.4. Consume an ident-like token - function consumeIdentLikeToken() { - const nameStartOffset = offset; - - // Consume a name, and let string be the result. - offset = consumeName$1(source, offset); - - // If string’s value is an ASCII case-insensitive match for "url", - // and the next input code point is U+0028 LEFT PARENTHESIS ((), consume it. - if (cmpStr$2(source, nameStartOffset, offset, 'url') && getCharCode(offset) === 0x0028) { - // While the next two input code points are whitespace, consume the next input code point. - offset = findWhiteSpaceEnd$1(source, offset + 1); - - // If the next one or two input code points are U+0022 QUOTATION MARK ("), U+0027 APOSTROPHE ('), - // or whitespace followed by U+0022 QUOTATION MARK (") or U+0027 APOSTROPHE ('), - // then create a with its value set to string and return it. - if (getCharCode(offset) === 0x0022 || - getCharCode(offset) === 0x0027) { - type = TYPE$2.Function; - offset = nameStartOffset + 4; - return; - } - - // Otherwise, consume a url token, and return it. - consumeUrlToken(); - return; - } - - // Otherwise, if the next input code point is U+0028 LEFT PARENTHESIS ((), consume it. - // Create a with its value set to string and return it. - if (getCharCode(offset) === 0x0028) { - type = TYPE$2.Function; - offset++; - return; - } - - // Otherwise, create an with its value set to string and return it. - type = TYPE$2.Ident; - } - - // § 4.3.5. Consume a string token - function consumeStringToken(endingCodePoint) { - // This algorithm may be called with an ending code point, which denotes the code point - // that ends the string. If an ending code point is not specified, - // the current input code point is used. 
- if (!endingCodePoint) { - endingCodePoint = getCharCode(offset++); - } - - // Initially create a with its value set to the empty string. - type = TYPE$2.String; - - // Repeatedly consume the next input code point from the stream: - for (; offset < source.length; offset++) { - var code = source.charCodeAt(offset); - - switch (charCodeCategory$1(code)) { - // ending code point - case endingCodePoint: - // Return the . - offset++; - return; - - // EOF - case charCodeCategory$1.Eof: - // This is a parse error. Return the . - return; - - // newline - case charCodeCategory$1.WhiteSpace: - if (isNewline$1(code)) { - // This is a parse error. Reconsume the current input code point, - // create a , and return it. - offset += getNewlineLength$1(source, offset, code); - type = TYPE$2.BadString; - return; - } - break; - - // U+005C REVERSE SOLIDUS (\) - case 0x005C: - // If the next input code point is EOF, do nothing. - if (offset === source.length - 1) { - break; - } - - var nextCode = getCharCode(offset + 1); - - // Otherwise, if the next input code point is a newline, consume it. - if (isNewline$1(nextCode)) { - offset += getNewlineLength$1(source, offset + 1, nextCode); - } else if (isValidEscape$2(code, nextCode)) { - // Otherwise, (the stream starts with a valid escape) consume - // an escaped code point and append the returned code point to - // the ’s value. - offset = consumeEscaped$1(source, offset) - 1; - } - break; - - // anything else - // Append the current input code point to the ’s value. - } - } - } - - // § 4.3.6. Consume a url token - // Note: This algorithm assumes that the initial "url(" has already been consumed. - // This algorithm also assumes that it’s being called to consume an "unquoted" value, like url(foo). - // A quoted value, like url("foo"), is parsed as a . Consume an ident-like token - // automatically handles this distinction; this algorithm shouldn’t be called directly otherwise. 
- function consumeUrlToken() { - // Initially create a with its value set to the empty string. - type = TYPE$2.Url; - - // Consume as much whitespace as possible. - offset = findWhiteSpaceEnd$1(source, offset); - - // Repeatedly consume the next input code point from the stream: - for (; offset < source.length; offset++) { - var code = source.charCodeAt(offset); - - switch (charCodeCategory$1(code)) { - // U+0029 RIGHT PARENTHESIS ()) - case 0x0029: - // Return the . - offset++; - return; - - // EOF - case charCodeCategory$1.Eof: - // This is a parse error. Return the . - return; - - // whitespace - case charCodeCategory$1.WhiteSpace: - // Consume as much whitespace as possible. - offset = findWhiteSpaceEnd$1(source, offset); - - // If the next input code point is U+0029 RIGHT PARENTHESIS ()) or EOF, - // consume it and return the - // (if EOF was encountered, this is a parse error); - if (getCharCode(offset) === 0x0029 || offset >= source.length) { - if (offset < source.length) { - offset++; - } - return; - } - - // otherwise, consume the remnants of a bad url, create a , - // and return it. - offset = consumeBadUrlRemnants$1(source, offset); - type = TYPE$2.BadUrl; - return; - - // U+0022 QUOTATION MARK (") - // U+0027 APOSTROPHE (') - // U+0028 LEFT PARENTHESIS (() - // non-printable code point - case 0x0022: - case 0x0027: - case 0x0028: - case charCodeCategory$1.NonPrintable: - // This is a parse error. Consume the remnants of a bad url, - // create a , and return it. - offset = consumeBadUrlRemnants$1(source, offset); - type = TYPE$2.BadUrl; - return; - - // U+005C REVERSE SOLIDUS (\) - case 0x005C: - // If the stream starts with a valid escape, consume an escaped code point and - // append the returned code point to the ’s value. - if (isValidEscape$2(code, getCharCode(offset + 1))) { - offset = consumeEscaped$1(source, offset) - 1; - break; - } - - // Otherwise, this is a parse error. Consume the remnants of a bad url, - // create a , and return it. 
- offset = consumeBadUrlRemnants$1(source, offset); - type = TYPE$2.BadUrl; - return; - - // anything else - // Append the current input code point to the ’s value. - } - } - } - - if (!stream) { - stream = new TokenStream_1(); - } - - // ensure source is a string - source = String(source || ''); - - var sourceLength = source.length; - var offsetAndType = adoptBuffer(stream.offsetAndType, sourceLength + 1); // +1 because of eof-token - var balance = adoptBuffer(stream.balance, sourceLength + 1); - var tokenCount = 0; - var start = isBOM$1(getCharCode(0)); - var offset = start; - var balanceCloseType = 0; - var balanceStart = 0; - var balancePrev = 0; - - // https://drafts.csswg.org/css-syntax-3/#consume-token - // § 4.3.1. Consume a token - while (offset < sourceLength) { - var code = source.charCodeAt(offset); - var type = 0; - - balance[tokenCount] = sourceLength; - - switch (charCodeCategory$1(code)) { - // whitespace - case charCodeCategory$1.WhiteSpace: - // Consume as much whitespace as possible. Return a . - type = TYPE$2.WhiteSpace; - offset = findWhiteSpaceEnd$1(source, offset + 1); - break; - - // U+0022 QUOTATION MARK (") - case 0x0022: - // Consume a string token and return it. - consumeStringToken(); - break; - - // U+0023 NUMBER SIGN (#) - case 0x0023: - // If the next input code point is a name code point or the next two input code points are a valid escape, then: - if (isName$2(getCharCode(offset + 1)) || isValidEscape$2(getCharCode(offset + 1), getCharCode(offset + 2))) { - // Create a . - type = TYPE$2.Hash; - - // If the next 3 input code points would start an identifier, set the ’s type flag to "id". - // if (isIdentifierStart(getCharCode(offset + 1), getCharCode(offset + 2), getCharCode(offset + 3))) { - // // TODO: set id flag - // } - - // Consume a name, and set the ’s value to the returned string. - offset = consumeName$1(source, offset + 1); - - // Return the . 
- } else { - // Otherwise, return a with its value set to the current input code point. - type = TYPE$2.Delim; - offset++; - } - - break; - - // U+0027 APOSTROPHE (') - case 0x0027: - // Consume a string token and return it. - consumeStringToken(); - break; - - // U+0028 LEFT PARENTHESIS (() - case 0x0028: - // Return a <(-token>. - type = TYPE$2.LeftParenthesis; - offset++; - break; - - // U+0029 RIGHT PARENTHESIS ()) - case 0x0029: - // Return a <)-token>. - type = TYPE$2.RightParenthesis; - offset++; - break; - - // U+002B PLUS SIGN (+) - case 0x002B: - // If the input stream starts with a number, ... - if (isNumberStart$1(code, getCharCode(offset + 1), getCharCode(offset + 2))) { - // ... reconsume the current input code point, consume a numeric token, and return it. - consumeNumericToken(); - } else { - // Otherwise, return a with its value set to the current input code point. - type = TYPE$2.Delim; - offset++; - } - break; - - // U+002C COMMA (,) - case 0x002C: - // Return a . - type = TYPE$2.Comma; - offset++; - break; - - // U+002D HYPHEN-MINUS (-) - case 0x002D: - // If the input stream starts with a number, reconsume the current input code point, consume a numeric token, and return it. - if (isNumberStart$1(code, getCharCode(offset + 1), getCharCode(offset + 2))) { - consumeNumericToken(); - } else { - // Otherwise, if the next 2 input code points are U+002D HYPHEN-MINUS U+003E GREATER-THAN SIGN (->), consume them and return a . - if (getCharCode(offset + 1) === 0x002D && - getCharCode(offset + 2) === 0x003E) { - type = TYPE$2.CDC; - offset = offset + 3; - } else { - // Otherwise, if the input stream starts with an identifier, ... - if (isIdentifierStart$1(code, getCharCode(offset + 1), getCharCode(offset + 2))) { - // ... reconsume the current input code point, consume an ident-like token, and return it. - consumeIdentLikeToken(); - } else { - // Otherwise, return a with its value set to the current input code point. 
- type = TYPE$2.Delim; - offset++; - } - } - } - break; - - // U+002E FULL STOP (.) - case 0x002E: - // If the input stream starts with a number, ... - if (isNumberStart$1(code, getCharCode(offset + 1), getCharCode(offset + 2))) { - // ... reconsume the current input code point, consume a numeric token, and return it. - consumeNumericToken(); - } else { - // Otherwise, return a with its value set to the current input code point. - type = TYPE$2.Delim; - offset++; - } - - break; - - // U+002F SOLIDUS (/) - case 0x002F: - // If the next two input code point are U+002F SOLIDUS (/) followed by a U+002A ASTERISK (*), - if (getCharCode(offset + 1) === 0x002A) { - // ... consume them and all following code points up to and including the first U+002A ASTERISK (*) - // followed by a U+002F SOLIDUS (/), or up to an EOF code point. - type = TYPE$2.Comment; - offset = source.indexOf('*/', offset + 2) + 2; - if (offset === 1) { - offset = source.length; - } - } else { - type = TYPE$2.Delim; - offset++; - } - break; - - // U+003A COLON (:) - case 0x003A: - // Return a . - type = TYPE$2.Colon; - offset++; - break; - - // U+003B SEMICOLON (;) - case 0x003B: - // Return a . - type = TYPE$2.Semicolon; - offset++; - break; - - // U+003C LESS-THAN SIGN (<) - case 0x003C: - // If the next 3 input code points are U+0021 EXCLAMATION MARK U+002D HYPHEN-MINUS U+002D HYPHEN-MINUS (!--), ... - if (getCharCode(offset + 1) === 0x0021 && - getCharCode(offset + 2) === 0x002D && - getCharCode(offset + 3) === 0x002D) { - // ... consume them and return a . - type = TYPE$2.CDO; - offset = offset + 4; - } else { - // Otherwise, return a with its value set to the current input code point. - type = TYPE$2.Delim; - offset++; - } - - break; - - // U+0040 COMMERCIAL AT (@) - case 0x0040: - // If the next 3 input code points would start an identifier, ... - if (isIdentifierStart$1(getCharCode(offset + 1), getCharCode(offset + 2), getCharCode(offset + 3))) { - // ... 
consume a name, create an with its value set to the returned value, and return it. - type = TYPE$2.AtKeyword; - offset = consumeName$1(source, offset + 1); - } else { - // Otherwise, return a with its value set to the current input code point. - type = TYPE$2.Delim; - offset++; - } - - break; - - // U+005B LEFT SQUARE BRACKET ([) - case 0x005B: - // Return a <[-token>. - type = TYPE$2.LeftSquareBracket; - offset++; - break; - - // U+005C REVERSE SOLIDUS (\) - case 0x005C: - // If the input stream starts with a valid escape, ... - if (isValidEscape$2(code, getCharCode(offset + 1))) { - // ... reconsume the current input code point, consume an ident-like token, and return it. - consumeIdentLikeToken(); - } else { - // Otherwise, this is a parse error. Return a with its value set to the current input code point. - type = TYPE$2.Delim; - offset++; - } - break; - - // U+005D RIGHT SQUARE BRACKET (]) - case 0x005D: - // Return a <]-token>. - type = TYPE$2.RightSquareBracket; - offset++; - break; - - // U+007B LEFT CURLY BRACKET ({) - case 0x007B: - // Return a <{-token>. - type = TYPE$2.LeftCurlyBracket; - offset++; - break; - - // U+007D RIGHT CURLY BRACKET (}) - case 0x007D: - // Return a <}-token>. - type = TYPE$2.RightCurlyBracket; - offset++; - break; - - // digit - case charCodeCategory$1.Digit: - // Reconsume the current input code point, consume a numeric token, and return it. - consumeNumericToken(); - break; - - // name-start code point - case charCodeCategory$1.NameStart: - // Reconsume the current input code point, consume an ident-like token, and return it. - consumeIdentLikeToken(); - break; - - // EOF - case charCodeCategory$1.Eof: - // Return an . - break; - - // anything else - default: - // Return a with its value set to the current input code point. 
- type = TYPE$2.Delim; - offset++; - } - - switch (type) { - case balanceCloseType: - balancePrev = balanceStart & OFFSET_MASK$1; - balanceStart = balance[balancePrev]; - balanceCloseType = balanceStart >> TYPE_SHIFT$1; - balance[tokenCount] = balancePrev; - balance[balancePrev++] = tokenCount; - for (; balancePrev < tokenCount; balancePrev++) { - if (balance[balancePrev] === sourceLength) { - balance[balancePrev] = tokenCount; - } - } - break; - - case TYPE$2.LeftParenthesis: - case TYPE$2.Function: - balance[tokenCount] = balanceStart; - balanceCloseType = TYPE$2.RightParenthesis; - balanceStart = (balanceCloseType << TYPE_SHIFT$1) | tokenCount; - break; - - case TYPE$2.LeftSquareBracket: - balance[tokenCount] = balanceStart; - balanceCloseType = TYPE$2.RightSquareBracket; - balanceStart = (balanceCloseType << TYPE_SHIFT$1) | tokenCount; - break; - - case TYPE$2.LeftCurlyBracket: - balance[tokenCount] = balanceStart; - balanceCloseType = TYPE$2.RightCurlyBracket; - balanceStart = (balanceCloseType << TYPE_SHIFT$1) | tokenCount; - break; - } - - offsetAndType[tokenCount++] = (type << TYPE_SHIFT$1) | offset; - } - - // finalize buffers - offsetAndType[tokenCount] = (TYPE$2.EOF << TYPE_SHIFT$1) | offset; // - balance[tokenCount] = sourceLength; - balance[sourceLength] = sourceLength; // prevents false positive balance match with any token - while (balanceStart !== 0) { - balancePrev = balanceStart & OFFSET_MASK$1; - balanceStart = balance[balancePrev]; - balance[balancePrev] = sourceLength; - } - - // update stream - stream.source = source; - stream.firstCharOffset = start; - stream.offsetAndType = offsetAndType; - stream.tokenCount = tokenCount; - stream.balance = balance; - stream.reset(); - stream.next(); - - return stream; - } - - // extend tokenizer with constants - Object.keys(_const).forEach(function(key) { - tokenize[key] = _const[key]; - }); - - // extend tokenizer with static methods from utils - Object.keys(charCodeDefinitions).forEach(function(key) { - 
tokenize[key] = charCodeDefinitions[key]; - }); - Object.keys(utils).forEach(function(key) { - tokenize[key] = utils[key]; - }); - - var tokenizer = tokenize; - - var isDigit$2 = tokenizer.isDigit; - var cmpChar$1 = tokenizer.cmpChar; - var TYPE$3 = tokenizer.TYPE; - - var DELIM = TYPE$3.Delim; - var WHITESPACE$1 = TYPE$3.WhiteSpace; - var COMMENT$1 = TYPE$3.Comment; - var IDENT = TYPE$3.Ident; - var NUMBER = TYPE$3.Number; - var DIMENSION = TYPE$3.Dimension; - var PLUSSIGN = 0x002B; // U+002B PLUS SIGN (+) - var HYPHENMINUS$1 = 0x002D; // U+002D HYPHEN-MINUS (-) - var N = 0x006E; // U+006E LATIN SMALL LETTER N (n) - var DISALLOW_SIGN = true; - var ALLOW_SIGN = false; - - function isDelim(token, code) { - return token !== null && token.type === DELIM && token.value.charCodeAt(0) === code; - } - - function skipSC(token, offset, getNextToken) { - while (token !== null && (token.type === WHITESPACE$1 || token.type === COMMENT$1)) { - token = getNextToken(++offset); - } - - return offset; - } - - function checkInteger(token, valueOffset, disallowSign, offset) { - if (!token) { - return 0; - } - - var code = token.value.charCodeAt(valueOffset); - - if (code === PLUSSIGN || code === HYPHENMINUS$1) { - if (disallowSign) { - // Number sign is not allowed - return 0; - } - valueOffset++; - } - - for (; valueOffset < token.value.length; valueOffset++) { - if (!isDigit$2(token.value.charCodeAt(valueOffset))) { - // Integer is expected - return 0; - } - } - - return offset + 1; - } - - // ... - // ... 
['+' | '-'] - function consumeB(token, offset_, getNextToken) { - var sign = false; - var offset = skipSC(token, offset_, getNextToken); - - token = getNextToken(offset); - - if (token === null) { - return offset_; - } - - if (token.type !== NUMBER) { - if (isDelim(token, PLUSSIGN) || isDelim(token, HYPHENMINUS$1)) { - sign = true; - offset = skipSC(getNextToken(++offset), offset, getNextToken); - token = getNextToken(offset); - - if (token === null && token.type !== NUMBER) { - return 0; - } - } else { - return offset_; - } - } - - if (!sign) { - var code = token.value.charCodeAt(0); - if (code !== PLUSSIGN && code !== HYPHENMINUS$1) { - // Number sign is expected - return 0; - } - } - - return checkInteger(token, sign ? 0 : 1, sign, offset); - } - - // An+B microsyntax https://www.w3.org/TR/css-syntax-3/#anb - var genericAnPlusB = function anPlusB(token, getNextToken) { - /* eslint-disable brace-style*/ - var offset = 0; - - if (!token) { - return 0; - } - - // - if (token.type === NUMBER) { - return checkInteger(token, 0, ALLOW_SIGN, offset); // b - } - - // -n - // -n - // -n ['+' | '-'] - // -n- - // - else if (token.type === IDENT && token.value.charCodeAt(0) === HYPHENMINUS$1) { - // expect 1st char is N - if (!cmpChar$1(token.value, 1, N)) { - return 0; - } - - switch (token.value.length) { - // -n - // -n - // -n ['+' | '-'] - case 2: - return consumeB(getNextToken(++offset), offset, getNextToken); - - // -n- - case 3: - if (token.value.charCodeAt(2) !== HYPHENMINUS$1) { - return 0; - } - - offset = skipSC(getNextToken(++offset), offset, getNextToken); - token = getNextToken(offset); - - return checkInteger(token, 0, DISALLOW_SIGN, offset); - - // - default: - if (token.value.charCodeAt(2) !== HYPHENMINUS$1) { - return 0; - } - - return checkInteger(token, 3, DISALLOW_SIGN, offset); - } - } - - // '+'? n - // '+'? n - // '+'? n ['+' | '-'] - // '+'? n- - // '+'? 
- else if (token.type === IDENT || (isDelim(token, PLUSSIGN) && getNextToken(offset + 1).type === IDENT)) { - // just ignore a plus - if (token.type !== IDENT) { - token = getNextToken(++offset); - } - - if (token === null || !cmpChar$1(token.value, 0, N)) { - return 0; - } - - switch (token.value.length) { - // '+'? n - // '+'? n - // '+'? n ['+' | '-'] - case 1: - return consumeB(getNextToken(++offset), offset, getNextToken); - - // '+'? n- - case 2: - if (token.value.charCodeAt(1) !== HYPHENMINUS$1) { - return 0; - } - - offset = skipSC(getNextToken(++offset), offset, getNextToken); - token = getNextToken(offset); - - return checkInteger(token, 0, DISALLOW_SIGN, offset); - - // '+'? - default: - if (token.value.charCodeAt(1) !== HYPHENMINUS$1) { - return 0; - } - - return checkInteger(token, 2, DISALLOW_SIGN, offset); - } - } - - // - // - // - // - // ['+' | '-'] - else if (token.type === DIMENSION) { - var code = token.value.charCodeAt(0); - var sign = code === PLUSSIGN || code === HYPHENMINUS$1 ? 
1 : 0; - - for (var i = sign; i < token.value.length; i++) { - if (!isDigit$2(token.value.charCodeAt(i))) { - break; - } - } - - if (i === sign) { - // Integer is expected - return 0; - } - - if (!cmpChar$1(token.value, i, N)) { - return 0; - } - - // - // - // ['+' | '-'] - if (i + 1 === token.value.length) { - return consumeB(getNextToken(++offset), offset, getNextToken); - } else { - if (token.value.charCodeAt(i + 1) !== HYPHENMINUS$1) { - return 0; - } - - // - if (i + 2 === token.value.length) { - offset = skipSC(getNextToken(++offset), offset, getNextToken); - token = getNextToken(offset); - - return checkInteger(token, 0, DISALLOW_SIGN, offset); - } - // - else { - return checkInteger(token, i + 2, DISALLOW_SIGN, offset); - } - } - } - - return 0; - }; - - var isHexDigit$2 = tokenizer.isHexDigit; - var cmpChar$2 = tokenizer.cmpChar; - var TYPE$4 = tokenizer.TYPE; - - var IDENT$1 = TYPE$4.Ident; - var DELIM$1 = TYPE$4.Delim; - var NUMBER$1 = TYPE$4.Number; - var DIMENSION$1 = TYPE$4.Dimension; - var PLUSSIGN$1 = 0x002B; // U+002B PLUS SIGN (+) - var HYPHENMINUS$2 = 0x002D; // U+002D HYPHEN-MINUS (-) - var QUESTIONMARK = 0x003F; // U+003F QUESTION MARK (?) 
- var U = 0x0075; // U+0075 LATIN SMALL LETTER U (u) - - function isDelim$1(token, code) { - return token !== null && token.type === DELIM$1 && token.value.charCodeAt(0) === code; - } - - function startsWith(token, code) { - return token.value.charCodeAt(0) === code; - } - - function hexSequence(token, offset, allowDash) { - for (var pos = offset, hexlen = 0; pos < token.value.length; pos++) { - var code = token.value.charCodeAt(pos); - - if (code === HYPHENMINUS$2 && allowDash && hexlen !== 0) { - if (hexSequence(token, offset + hexlen + 1, false) > 0) { - return 6; // disallow following question marks - } - - return 0; // a dash at the end of a hex sequence is not allowed - } - - if (!isHexDigit$2(code)) { - return 0; // not a hex digit - } - - if (++hexlen > 6) { - return 0; // too many hex digits - } } - - return hexlen; - } - - function withQuestionMarkSequence(consumed, length, getNextToken) { - if (!consumed) { - return 0; // nothing consumed - } - - while (isDelim$1(getNextToken(length), QUESTIONMARK)) { - if (++consumed > 6) { - return 0; // too many question marks - } - - length++; - } - - return length; - } - - // https://drafts.csswg.org/css-syntax/#urange - // Informally, the production has three forms: - // U+0001 - // Defines a range consisting of a single code point, in this case the code point "1". - // U+0001-00ff - // Defines a range of codepoints between the first and the second value, in this case - // the range between "1" and "ff" (255 in decimal) inclusive. - // U+00?? - // Defines a range of codepoints where the "?" characters range over all hex digits, - // in this case defining the same as the value U+0000-00ff. - // In each form, a maximum of 6 digits is allowed for each hexadecimal number (if you treat "?" as a hexadecimal digit).
-//
-// <urange> =
-//   u '+' <ident-token> '?'* |
-//   u <dimension-token> '?'* |
-//   u <number-token> '?'* |
-//   u <number-token> <dimension-token> |
-//   u <number-token> <number-token> |
-//   u '+' '?'+
-var genericUrange = function urange(token, getNextToken) {
-    var length = 0;
-
-    // should start with `u` or `U`
-    if (token === null || token.type !== IDENT$1 || !cmpChar$2(token.value, 0, U)) {
-        return 0;
-    }
-
-    token = getNextToken(++length);
-    if (token === null) {
-        return 0;
-    }
-
-    // u '+' <ident-token> '?'*
-    // u '+' '?'+
-    if (isDelim$1(token, PLUSSIGN$1)) {
-        token = getNextToken(++length);
-        if (token === null) {
-            return 0;
-        }
-
-        if (token.type === IDENT$1) {
-            // u '+' <ident-token> '?'*
-            return withQuestionMarkSequence(hexSequence(token, 0, true), ++length, getNextToken);
-        }
-
-        if (isDelim$1(token, QUESTIONMARK)) {
-            // u '+' '?'+
-            return withQuestionMarkSequence(1, ++length, getNextToken);
-        }
-
-        // Hex digit or question mark is expected
-        return 0;
-    }
-
-    // u <number-token> '?'*
-    // u <number-token> <dimension-token>
-    // u <number-token> <number-token>
-    if (token.type === NUMBER$1) {
-        if (!startsWith(token, PLUSSIGN$1)) {
-            return 0;
-        }
-
-        var consumedHexLength = hexSequence(token, 1, true);
-        if (consumedHexLength === 0) {
-            return 0;
-        }
-
-        token = getNextToken(++length);
-        if (token === null) {
-            // u <number-token>
-            return length;
-        }
-
-        if (token.type === DIMENSION$1 || token.type === NUMBER$1) {
-            // u <number-token> <dimension-token>
-            // u <number-token> <number-token>
-            if (!startsWith(token, HYPHENMINUS$2) || !hexSequence(token, 1, false)) {
-                return 0;
-            }
-
-            return length + 1;
-        }
-
-        // u <number-token> '?'*
-        return withQuestionMarkSequence(consumedHexLength, length, getNextToken);
-    }
-
-    // u <dimension-token> '?'*
-    if (token.type === DIMENSION$1) {
-        if (!startsWith(token, PLUSSIGN$1)) {
-            return 0;
-        }
-
-        return withQuestionMarkSequence(hexSequence(token, 1, true), ++length, getNextToken);
-    }
-
-    return 0;
-};
-
-var isIdentifierStart$2 = tokenizer.isIdentifierStart;
-var isHexDigit$3 = tokenizer.isHexDigit;
-var isDigit$3 = tokenizer.isDigit;
-var cmpStr$3 = tokenizer.cmpStr;
-var consumeNumber$2 = tokenizer.consumeNumber;
-var TYPE$5 = tokenizer.TYPE;
-
-
-
-var cssWideKeywords = ['unset', 'initial', 'inherit'];
-var calcFunctionNames = ['calc(', '-moz-calc(', '-webkit-calc('];
-
-// https://www.w3.org/TR/css-values-3/#lengths
-var LENGTH = {
-    // absolute length units
-    'px': true,
-    'mm': true,
-    'cm': true,
-    'in': true,
-    'pt': true,
-    'pc': true,
-    'q': true,
-
-    // relative length units
-    'em': true,
-    'ex': true,
-    'ch': true,
-    'rem': true,
-
-    // viewport-percentage lengths
-    'vh': true,
-    'vw': true,
-    'vmin': true,
-    'vmax': true,
-    'vm': true
-};
-
-var ANGLE = {
-    'deg': true,
-    'grad': true,
-    'rad': true,
-    'turn': true
-};
-
-var TIME = {
-    's': true,
-    'ms': true
-};
-
-var FREQUENCY = {
-    'hz': true,
-    'khz': true
-};
-
-// https://www.w3.org/TR/css-values-3/#resolution (https://drafts.csswg.org/css-values/#resolution)
-var RESOLUTION = {
-    'dpi': true,
-    'dpcm': true,
-    'dppx': true,
-    'x': true      // https://github.com/w3c/csswg-drafts/issues/461
-};
-
-// https://drafts.csswg.org/css-grid/#fr-unit
-var FLEX = {
-    'fr': true
-};
-
-// https://www.w3.org/TR/css3-speech/#mixing-props-voice-volume
-var DECIBEL = {
-    'db': true
-};
-
-// https://www.w3.org/TR/css3-speech/#voice-props-voice-pitch
-var SEMITONES = {
-    'st': true
-};
-
-// safe char code getter
-function charCode(str, index) {
-    return index < str.length ? str.charCodeAt(index) : 0;
-}
-
-function eqStr(actual, expected) {
-    return cmpStr$3(actual, 0, actual.length, expected);
-}
-
-function eqStrAny(actual, expected) {
-    for (var i = 0; i < expected.length; i++) {
-        if (eqStr(actual, expected[i])) {
-            return true;
-        }
-    }
-
-    return false;
-}
-
-// IE postfix hack, i.e. 123\0 or 123px\9
-function isPostfixIeHack(str, offset) {
-    if (offset !== str.length - 2) {
-        return false;
-    }
-
-    return (
-        str.charCodeAt(offset) === 0x005C &&  // U+005C REVERSE SOLIDUS (\)
-        isDigit$3(str.charCodeAt(offset + 1))
-    );
-}
-
-function outOfRange(opts, value, numEnd) {
-    if (opts && opts.type === 'Range') {
-        var num = Number(
-            numEnd !== undefined && numEnd !== value.length
-                ? value.substr(0, numEnd)
-                : value
-        );
-
-        if (isNaN(num)) {
-            return true;
-        }
-
-        if (opts.min !== null && num < opts.min) {
-            return true;
-        }
-
-        if (opts.max !== null && num > opts.max) {
-            return true;
-        }
-    }
-
-    return false;
-}
-
-function consumeFunction(token, getNextToken) {
-    var startIdx = token.index;
-    var length = 0;
-
-    // balanced token consuming
-    do {
-        length++;
-
-        if (token.balance <= startIdx) {
-            break;
-        }
-    } while (token = getNextToken(length));
-
-    return length;
-}
-
-// TODO: implement
-// can be used wherever <length>, <frequency>, <angle>,