
Contemporary musing on Insight pack and management

#1 ·
It's been about a year and a half since I installed what I said would be the last Insight NiMH pack I'd work on/with, having gotten tired of it all. But this original pack, in its reconditioned state, has been working awfully well and producing some surprising results. It works in ways that I never thought possible with Insight NiMH. That's mainly why I'm here - one particular aspect that makes me wonder about the car's stock management and the way the pack - a good pack - should actually perform...

It's all kind of complicated so I'll try to stay as focused as possible on only the 'main thing'. That main thing is this: I can use the pack at what seems like an historically low state of charge, get full assist, and have voltage remain high and stable despite that low state of charge, cool temps, and relatively high assist discharge... This, plus what I've seemingly done to achieve this, plus what appears to be improvement over time/usage, makes me seriously question whether the stock pack management has a major flaw, a flaw that causes deterioration...

In a nutshell, I finagle pack management with the OBDIIC&C and calpod clutch switch, basically forcing usage of the pack in a relatively low charge state. This has allowed me to get a better read on what happens 'down there'. You gotta do this 'finagling' most of the time; otherwise, the car will background charge and such and keep the pack at a higher charge state.

The thing is, if you do this, the pack - my pack, at least - will fairly quickly... 'acclimate' at the lower charge state and it will start looking and performing just as it does at the higher charge state... I'm mainly talking about voltage. If you start at a high-ish charge state and discharge (use assist) for a long stretch, pack voltage will drop and stay pretty low, say around 152V instead of around 161V. The car will background charge. But, if you purposely try to use the pack in that low charge state, doing assist and regen around say 40% state of charge, the voltage will fairly quickly come back up - it will gravitate toward about 161V, not that lower 152V level that it started at...

Say you discharge from 70% to 45%. Pack starts at 164V at rest and ends at 152V at rest. The car tries to background charge but you hit your calpod switch instead, preventing it. You purposely use the pack at around 40% to 50% for a few up-down cycles. Soon, pack voltage will pop back up to around 161V - at that 45% charge state - not the 152V you started at...

Now this, I think, is the crux of the potential stock mismanagement. It appears that background charging at least sometimes responds to voltage, most likely tap voltage. The problem with this, it seems, is that there's too much 'hysteresis' in the voltage behavior... It seems pretty clear that if I force pack usage at progressively lower charge states, letting voltage 'acclimate'/adjust or whatever each time, background charging won't kick in as soon and/or as often as it otherwise would. It really looks like the car is using tap voltage at least in part to decide when to background charge: on the first cycle down, say to 45%, voltage will drop low, the car tries to background charge, I stop it with the calpod. I use the pack around the 40% level and voltage pops up. After voltage pops back up, the car doesn't try to background charge as often... But that wouldn't happen if I didn't manually, purposely manipulate the car with the calpod, turning off the background charge and using the pack at a low charge state. Rather, the car would see the first low-ish voltage, the low-ish voltage on the first cycle down, and immediately start to background charge, charging until say 70% nominal. And it will keep doing this over and over, never letting the pack get used in even modestly low charge states...
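
To make the suspected flaw concrete, here's a toy sketch of a purely voltage-triggered background-charge rule. The threshold and names are invented for illustration - this is a picture of the suspected behavior, not Honda's actual logic:

```python
# Toy model of a purely voltage-triggered background-charge rule.
# The threshold is invented for illustration -- not Honda's actual logic.

REST_V_TRIGGER = 153.0  # hypothetical: resting voltage below this requests a charge

def wants_background_charge(rest_voltage: float) -> bool:
    """Naive rule: low resting voltage is read as low SoC, so charge."""
    return rest_voltage < REST_V_TRIGGER

# The catch: after a long assist pull, a ~45% SoC pack can rest at ~152V,
# so the rule fires. After a few assist/regen cycles at that same ~45%
# SoC, the pack recovers to ~161V and the rule stops firing. A voltage-only
# trigger conflates "low SoC" with "temporarily depressed voltage" -- so
# it charges more often, and sooner, than it needs to.
print(wants_background_charge(152.0))  # True  -> charges at ~45% SoC
print(wants_background_charge(161.0))  # False -> same SoC after 'acclimation'
```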

The stock management appears to charge too much, too often, keeping charge state too high too much. I think this ends up 'crudding up' the lower charge states, makes them even less usable - higher resistance, voltage depression, whatever you want to call it.

I'm seeing something like 144-145V at about 20-25 amp discharge, at 62F, at around 40% state of charge. I can also invoke full assist and get it - somewhere around 70-80 amps at 125V, with these conditions... I used to think the BCM charged the pack to around 70-75% because the pack could only perform well, like put out full assist, when the pack was charged above say 60%. I don't see it that way at all, now... Being able to use the pack the way those numbers describe used to be unheard of to me - it seemed impossible.

If you've ever seen discharge curves (voltage vs. time or capacity) for Insight NiMH cells, good ones are really quite flat over most of the discharge, while bad ones, mainly ones that have whatever kind of voltage depression/memory effect, crud, whatever, will crater early on, typically well before the midpoint. It's this area of the capacity - the area between the midpoint and the end - that, over time, gets used less and less in the Insight. And I think it's a self-reinforcing loop, too: the less this area gets used, the more deteriorated it becomes - and the less it will be used on the next iteration, and so on. But if you force usage at low charge states, I think it might keep the 'curve' high, keep it from cratering early. And then the car allows usage of that area on its own, more and more. To a point, at least (I think there are hard-coded behaviors, like fixed charge state percentages, that trigger stuff. For instance, at about 38% nominal, I don't think you get as much assist, regardless of voltage)...
 
#2 ·
You're forcing your well-balanced pack into the lower SoC area, which reactivates dormant material.

That's why deep discharging with light bulbs etc. works as well. It's basically the same thing, but because of the low currents it can be used with poorly balanced packs with a smaller risk of damage.

In the UK CVT rally car, the stock battery always performed very well when given a real hammering and used until it was empty, then charged with tickover regen until full.

Exercising the pack over the full range is definitely beneficial... Same for humans... :)
 
#3 ·
You're forcing your well-balanced pack into the lower SoC area, which reactivates dormant material. That's why deep discharging with light bulbs etc. works as well. It's basically the same thing, but because of the low currents it can be used with poorly balanced packs with a smaller risk of damage.
Hmm, I don't think it's the same process/mechanism as low current deep discharge, though it could be similar and/or analogous in ways... The problems each method addresses - low current deep discharge and what I'm describing, just forcing usage of relatively lower charge state areas - seem different. What I'm describing seems to address 'damage' or 'dysfunction' or simply normal operation that's not carefully dealt with by the BCM. Deep discharge addresses longer term, deeper seated deterioration...

I'm suggesting that usage of even normally functional Insight NiMH cells includes 'hysteresis' that the BCM doesn't adequately handle or consider. Either that or some form of deterioration sets in very very quickly - that the BCM can't deal with appropriately... I'd guess it's a combination of the two. (Or, I guess it could be that my pack is old and newer packs don't behave like this - which I don't know since I've never had a new pack - and Honda just figured packs don't need to last forever)...

Over time, the "deeper seated" deterioration sets in - and that's when low current, ultra deep discharge is needed. The electro-chemistry that happens with that isn't the same as what would happen with simply using the pack at normal charge and discharge rates at merely somewhat lower charge states than the car usually sticks to. I mean, around 40% charge state isn't that low; it's simply lower than what the car normally pushes toward... But it IS in a range - roughly below 50% - that seems to get 'eaten up' quickly in normal day to day operation. I've discharged sticks/cells after various forms of partial usage (i.e. not full cycles) and it seems par for the course that discharge curves will get depressed (voltage sag over the second half of the curve, midpoint voltage happening earlier and earlier in the discharge)...

I'm not actually sure whether using the pack at lower charge states will preserve a normal, flatter, loftier discharge curve. I get the curve normal through ultra deep discharge; I'm thinking using the pack at lower than normal charge states will help keep it normal. I've been keeping it low for about half a year or so at this point, but it wasn't until now that I realized just how far I could take it, or rather, how much farther I could take it than I had been (I'm still not sure how low I can go; 40% seems about the limit, as 38% seems to have those 'hard-coded' nominal triggers)...

Either that or it's simply an interesting test: if one uses the pack around the 40% level day to day, rather than around 70%, without problems, without recals, etc., it's like you know your pack is good - because you can always let it charge up to 75% and - boom - you have another 35% of capacity to burn. You can purposely make the car use that low range, knowing you're on the edge, and it's easier to spot when or if failure starts to happen, too... In other words, I'm fairly sure that normal operation means a discharge curve that sags more and more, day by day. The BCM will force usage at higher and higher charge states - and that makes the problem even worse. If you can continue to use the pack around 40% day to day, you know your pack isn't suffering the degradation that heretofore has been par for the course - because if it were, you wouldn't be operating at 40%, you'd be getting neg recals. Or, you'd simply see voltage getting lower and lower, background charge kicking in incessantly, lower discharge currents, etc. etc...
 
#6 ·
I mentioned earlier that I wasn't sure just how low I could take charge state while still being able to bring voltage back up to the levels it reaches at high charge states, due to 'hard-coded' stuff. I experimented more with that and it looks like around 36% or so, at least with my 305 BCM, is the limit. I could still get voltage back up to normal high levels (around 158 to 163V or so, down around 29% to 39%), but despite that, background charge wouldn't go away on its own, and more importantly, assist was throttled.

Basically, at around 36% it's very difficult to invoke assist - it takes much more throttle, so ICE is coming into play way more, plus the assist you get is feeble. I couldn't get full blown full assist - about 50 amps was the highest, at full throttle, despite a high voltage. Partial throttle assist would kick in at around 10-15 amps only, rather than around 20+ amps (it's very easy to invoke assist in 4th gear and hold a constant 20 amp discharge or so, with even very light throttle, while pack voltage hovers around 144V, so I'm very familiar with what usually happens)...

It's been mentioned that different BCMs behave differently at this low end; I recall Eli at Bumblebee saying something like the older BCMs don't throttle assist much, if at all, at the bottom. I'm tempted to install my really old BCM - I think it's an 010 with a really low serial number (that secondary number/code that the OBDIIC&C reads)...

One other thing I didn't mention, that I'm wondering about but haven't really verified, is whether 'the top end' will actually fill up normally after a lot of use at the low end. Earlier I said that if you're operating at the low end you always know you've got all that top end at your disposal - 'just charge back up and - boom - you've got another 35%' of capacity. But truthfully, when I go to charge up I could get a 'premature pos recal' - a pos recal to 75% well before the true 75% mark.

In my head I'm envisioning that usage at the low end is preserving the flatness of the discharge curve, the loft in the midpoint to lower part of the curve. When you charge ultra deep discharged cells (or deeply self discharged cells), voltage always goes high almost from the start and stays high. If you charge a cell or stick that's been used to some degree, voltage stays pretty low when you charge from empty and only gradually gets high. I've always wondered what that's all about. Seems like something closer to the 'charge after ultra deep discharge' pattern would be better.

That's one thing I've seen during my low end experimentation, too: the more I've done this, the easier voltage seems to pop back up to the chemistry's equilibrium voltage (something like 1.318V per cell, so 158.16V for the pack)... Rather than seeing a somewhat unusual response to normal cells, it does seem like I could be seeing a somewhat unusual response to cells that are undergoing some kind of reconditioning... It just seems odd that it'd take this kind of use, after a year and a half of more or less normal use and some reconditioning stuff, to actually see a difference. I guess the main question will be whether it actually sticks; I've seen similar behavior before - not as clear cut and extreme, but similar nevertheless - but it has always been more or less transient. You drive, you use the pack, it warms up, cells 'get primed', they perform better. The next day you'd have to go through this whole usage scenario again to see the higher-level performance. Here/now I'm thinking the higher-level performance at such low charge states might actually stick around. What's more, I've seen higher performance at higher charge states as well - voltage stays higher longer, and more stable... Instead of seeing, say 145V at 20-25 amps at 70 degrees F, I might see 152V under the same conditions... Or even higher.

The basic question/scenario here has to do with that voltage hysteresis and whether different usage and reconditioning can produce a more favorable... voltage response. For instance, instead of voltage sagging and staying low after a relatively long assist event, voltage might stay higher longer and pop back up faster when it does drop low under load... I've described the way Insight NiMH works as being like a moving window - the usable capacity window moves around depending on where the NiOOH/Ni(OH)2 interface is positioned in the cell, and its shape changes depending on - something, perhaps the 'cleanness' of the ionic pathways, the amount of electrolyte in the vicinity, et al. I wonder whether different usage can grow this window - make it bigger, make it stick around longer. It kind of seems like something like that must be happening when I'm seeing voltage stay higher longer but at lower charge states, and when I'm seeing higher voltages under load at higher charge states...

I'm pretty much a believer in what I'll call the 'lawn-mowing theory' of Insight NiMH, kind of what Peter described above. If you leave an area of your lawn un-mowed, it ends up growing in ways way different from the rest that you did mow. You gotta mow the whole thing if you want it to look the same. It's kind of the same way with Insight NiMH: if you want the cell to perform similarly over the whole charge state range, you gotta use the whole cell. Otherwise, areas of it will grow in different ways and end up performing differently... And this gets us back to why the stock battery management might be seriously flawed: one flaw might be that it concentrates usage in a very small charge state area (roughly a 10% window around, well, nominally 65% to 75%, but I think that tends to shift higher with time and degradation, so it ends up being around 75% to 85%. On the other hand, '75%' and '85%' - these values lose almost all meaning the way the stock management works, the way usage under this management slowly shrinks total capacity and concentrates usage higher and higher)...

To this there's just one bit I'll add: after this kind of degradation, the more serious problem might become uneven self discharge, at which point usage will be concentrated lower and lower, not higher. The higher charge states will languish/degrade because they're never used. Total capacity shrinks because you've got at least one cell that drains faster and hits the low voltage cutoff first and early, and then the other cells don't charge up as much because the high charge state degradation means high resistance, higher voltage, and hitting the upper voltage cutoff too soon...
 
#8 · (Edited)
^ Nah, I don't think you're understanding what I'm doing, what I'm describing. I don't blame you. I've already worked with this pack 'reconditioning'. I've seen what each stick does on the bench, I've examined each cell, etc...

[Later...] I think the most important aspect here is that, regardless of what reconditioning I've done, the pack is performing at a higher level only after I've circumvented the car's stock pack management. This is positive behavior/results that I would otherwise not be seeing.

The takeaway for lay folks is that they should probably be trying to use the pack at lower charge states, rather than letting the car keep charging the pack over and over again to high charge states and keeping usage within a narrow, high charge state window.

My sense is that stock battery management should have been engineered to use the pack in the middle, with cycling dipping down and peaking up from the middle, not perpetually focused in the higher 65%+ range... I've done a lot to this pack (and others) and I've driven with it with various usage patterns. But I've never forced such extended low usage before - and I've never seen the degree of improvement I'm seeing. Is it a coincidence - use pack at low charge state > see unprecedented improvement? Possible, but I'm pretty sure it's not...

Granted, this low usage follows on the heels of a tap-level ultra deep discharge, but I've done that before and I didn't see the degree of improvement I'm seeing now... Maybe the ultra deep discharge plus forced low charge state usage are working together... I mean, they do to a certain extent, as the ultra deep discharge is partially what allows the pack to be used so low in the first place. But I imagine whatever conditioning is happening by forced low charge state usage would take place whether one's pack were ultra deep discharged or not. Maybe. Or the flip side: whatever 'conditioning' (i.e. degradation) happens normally, with the car forcing usage in the high charge state window, should be undone with intentional pack usage in the low charge state... I'm really leaning toward this side. It's not so much adding a positive - some 'reconditioning' - but rather, it's undoing a negative - circumventing the concentrated high charge state usage that the BCM imposes...
 
#9 ·
I was skimming through an old Panasonic NiMH Technical Handbook and came across a rather matter-of-fact statement - something I've seen before yet never fully appreciated - that supports the notion, explained above, that Honda's battery management is most likely flawed, maybe seriously so. If true it would explain why my pack performance continues to shine - as long as I manually thwart OEM management and force low charge state usage...

Pretty sure most BCMs don't allow discharging below about 132 volts, or 1.1V per cell (13.2V tap level). Background charge/forced charging usually kicks in well before that or, if assist is in play, assist throttling ends up maintaining about 13.2V. If you go full throttle assist you can drop voltage to 120V/12V, but in general, the level the car tries to maintain is 132V/13.2V. The problem with this is that discharge voltage characteristics suffer - you end up with voltage sag - if cells aren't discharged low enough. Panasonic says they should be discharged to 1V... This is age-old stuff - it's 'memory effect' and 'voltage depression' - but here it is, plain as day, in Panasonic's own words:

"Discharge characteristics
The discharge characteristics of Nickel-metal hydride batteries are affected by current, temperature, etc., and discharge voltage characteristics are flat at 1.2V, which is almost the same as Ni-Cd batteries. Discharge voltage and discharge efficiency decrease proportional to current rise or temperature drop. As with Ni-Cd batteries, repeated charge and discharge under high discharge cutoff voltage conditions (more than 1.1V per cell) causes a drop in the discharge voltage, which is sometimes accompanied by a simultaneous drop in capacity. Normal discharge characteristics can be restored by charge and discharge to a discharge end voltage down to 1.0V per cell."

Certainly over time and usage our packs degrade due to this: packs usually only get discharged to the equivalent of 1.1V per cell; cells never see this 'Panasonic restoration voltage' of 1V. It'd actually be worse if you drove at night all the time, when background charge kicks in at a high charge state, or if you don't use much assist. I also think it can happen very quickly, though, perhaps more so or only with older packs...
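
Just to put those numbers side by side - the 12-cell tap and 120-cell pack counts are the stock configuration, and the rest is plain arithmetic:

```python
# Per-cell thresholds scaled to tap (12 cells) and pack (120 cells) level
# for the stock pack. The 1.1V and 1.0V figures come from the Panasonic
# excerpt above; everything else is arithmetic.

CELLS_PER_TAP, CELLS_PER_PACK = 12, 120

for label, v_cell in [("car's effective discharge floor", 1.1),
                      ("Panasonic's restorative end voltage", 1.0)]:
    print(f"{label}: {v_cell:.1f}V/cell = "
          f"{v_cell * CELLS_PER_TAP:.1f}V/tap = "
          f"{v_cell * CELLS_PER_PACK:.0f}V pack")

# -> car's effective discharge floor: 1.1V/cell = 13.2V/tap = 132V pack
# -> Panasonic's restorative end voltage: 1.0V/cell = 12.0V/tap = 120V pack
# The car's floor sits a full 12V (pack level) above the end voltage
# Panasonic says restores normal discharge characteristics.
```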

Again, the takeaway is that, if you want your pack to last and/or to work better, you should be trying to force low charge state usage. Initially this may require some kind of deep discharge/grid charge, because if your pack is imbalanced or too crudded-up you won't be able to take it low in the first place. But after that, I think at minimum something like a weekly low charge state drive is needed to maintain pack performance.

Here's a link to a post that nicely sums up the more technical, electro-chemical explanations for all this: https://www.insightcentral.net/threads/tinkering-with-a-non-working-ima-battery.81778/page-11#post-912394
 
#11 ·
I was skimming through an old Panasonic NiMH Technical Handbook and came across a rather matter-of-fact statement -

"Discharge characteristics
The discharge characteristics of Nickel-metal hydride batteries are affected by current, temperature, etc., and discharge voltage characteristics are flat at 1.2V, which is almost the same as Ni-Cd batteries. Discharge voltage and discharge efficiency decrease proportional to current rise or temperature drop. As with Ni-Cd batteries, repeated charge and discharge under high discharge cutoff voltage conditions (more than 1.1V per cell) causes a drop in the discharge voltage, which is sometimes accompanied by a simultaneous drop in capacity. Normal discharge characteristics can be restored by charge and discharge to a discharge end voltage down to 1.0V per cell."
I've read this entire thread a couple of times trying to decipher how your routine differs from deep discharging during normal restorative grid cycling. (Brevity isn't your strong suit. ;)) It looks like the fundamental difference is in the time that the battery spends in a low voltage state. Might one interpret this as an indication that it takes more time than we normally allow with cycling to re-energize the parts of the battery material lying dormant?
 
#10 ·
I'm still thinking about low charge state usage, somewhat experimenting with it. I've had a cell with slightly faster self discharge in my main pack, and when I let the pack sit for a month I ended up having to deal with that. I basically pulled the offending stick, balanced the cells, and put it back in. But I haven't done anything to the rest of the pack, besides in-car 'manual methods'. I used the pack at high charge states to take advantage of ways the pack can be balanced in the car, and I've mainly been using the pack just above the neg recal point. This 'just above neg recal point' usage is what's on my mind...

I continue to be blown away by just how well a pack can work at extremely low charge state. What's more, I'm becoming increasingly convinced that a much more regimented battery management could wring out way more use of the OEM NiMH packs. Part of this is my growing awareness of just how little capacity one needs for like 95% of the driving we do... A super-tightly-controlled battery management could pretty easily get away with 10% of the stock capacity - so 650mAh. I'd say 20%, or about 1300mAh, is all we'd really need - if the cells could take it.

Most of the usage should be handled as if the battery were more like a capacitor bank. I'm thinking Honda had that kind of usage in mind -- that most of the capacity of the stock pack is there simply as a buffer, to handle high power demands, especially under deteriorated or sub-optimal conditions, and for longevity. As packs deteriorate over time and usage, demands remain modest and so the excess 'buffer' capacity can come into play and handle it (provided the deterioration isn't the debilitating kind, which it wouldn't be if the battery management were appropriate).

You might use a few hundred mAh per assist pop, like 80% of IMA usage; another 15% might be of the slightly longer duration, like up to 10% of capacity, so 650mAh. The remaining 5% of usage might be of the more or less unnecessary, highway incline, longer duration assist -- the kind where you get very little benefit, you can simply downshift and maintain speed and mpg... I think not realizing this stuff is a major misconception: I think we naturally tend to think of the pack as a reserve for extended assist, when in reality it's intended to provide short-term boost and little more...
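
Rough numbers behind that, as a sanity check - the per-event sizes are my ballpark readings of the percentages above, not measured data:

```python
# Back-of-the-envelope check on how little capacity day-to-day IMA use
# needs. Event sizes are ballpark guesses based on the breakdown above.

PACK_MAH = 6500                 # stock pack capacity

short_pop   = 300               # mAh: "a few hundred mAh per assist pop"
longer_pull = 0.10 * PACK_MAH   # mAh: "up to 10% of capacity"

print(f"~80% of events: ~{short_pop} mAh ({100*short_pop/PACK_MAH:.0f}% of pack)")
print(f"~15% of events: up to {longer_pull:.0f} mAh ({100*longer_pull/PACK_MAH:.0f}% of pack)")
print(f"working window: {0.10*PACK_MAH:.0f}-{0.20*PACK_MAH:.0f} mAh (10-20% of stock)")

# Roughly 95% of usage fits inside a 650-1300 mAh window -- on this
# view, the rest of the 6500 mAh is buffer.
```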

The flip side of this is regen: using the pack at low charge states helps to maximize regen. I think stock management neglects regen: with charge state maintained high, regen is often throttled and is less efficient. Plus, it probably places more of a burden on the cells, where oxidizing conditions are more severe and you have a greater chance for unwanted reactions...

One general feature of a better battery management scheme would be maintaining at most a 20% usable capacity window. It should be low enough to allow max regen. And I'm tending to think most of the time it should be low charge state, not high-middle or high like it is in OEM form. Low charge state usage allows the cells to drop to low voltages, such as that 1V level mentioned in the Panasonic excerpt I posted earlier. As long as the cells are reasonably balanced, I'm not seeing any problems posed by cells dropping low; on the contrary, almost all my work shows a palliative effect from dropping voltage low...

Here's a graph I posted in another thread (or 2) that shows what low voltage usage does for a cell's discharge voltage curve. The yellow curve was a fast self discharge cell, so it was getting dropped to a lower voltage than other cells. Look how high the voltage curve is compared to the other (black) curve. If ALL the cells had such lofty curves, it would be like adding another 18 volts to the pack. There's a more detailed explanation at the link below the graph.
[Graph: cell discharge voltage curves - the yellow (fast self-discharge) cell vs. a typical (black) cell]


I'm not positive, but I'm pretty sure my low charge state usage - at this point hovering just above empty for some days - has lifted the 'black cells' closer to the level of the yellow one.

In conclusion (as they say), a better battery management would be able to maintain the lofty yellow curve (but of course across the whole charge state range); it wouldn't let cells get like the black curve. At any given time we want our cells optimized to put out the most power and take in the most power, not put out low power over a long time. I think OEM management isn't quite attuned to making this happen. It doesn't go low enough, it goes too high too often, and in the process cells end up deteriorating.
 
#15 ·
These are good musings. I'm having a hard time following this, mostly because of sleep deprivation. Some thoughts:
I think we naturally tend to think of the pack as a reserve for extended assist, when in reality it's intended to provide short-term boost and little more...
My CVT would agree with you. It seems to stay at lower RPMs only when the load is low. It seems to run at a much higher RPM (with proportionally less assist) than I would run the manual when I'm doing anything other than just tooling along a flat road at 35 MPH.
1) by doing taps and using a tiny load, you can better equalize the discharge characteristics each cell sees, such as the amount of discharge and the deepness, and 2) going deeper and slower is more thorough.
This may be correct, but I think the next statement is more important, discussing the "light bulb" full pack discharge:
the cell gets driven too low, even reversed, and gets held there. If the unevenness is too severe, the reversal can mess up the negative electrode...
Yeah! The first cell to reverse is going to reverse while the pack still has a lot of voltage! So you might drive, what, an amp or more through that cell, reversed? I know for a fact (through an experiment earlier this year) that a good way to kill a weak cell fast is to put a lot of current through it reversed.

I want to know more about when the BCM decides to kill assist - how close did a cell in a stick get to reversal? Does the "canary cell" get repeatedly insulted (damaged) just a little bit from too-low discharge and does this eventually accumulate on that particular cell?
Typically, full pack methods use higher currents, too, so the 10% cell endures a higher current and the above is even worse.
As will the next few cells that reverse after it, using the "light bulb discharge" method. Maybe with the ECM, only the worst cell gets insulted before the BCM unloads the pack. So it makes me nervous to hook up a light bulb to a pack and walk away.

I would really prefer making a jig that conditions a pack at the stick level or even the cell level (high rate until cell voltage drops to a certain point, then reducing current appropriately), but I don't know where I'm going to find the time to build this. An LTO conversion seems to be a better investment in time/money, unless one builds the conditioner out to handle max one stick at a time, and has a spare pack to run while the other is torn apart and being conditioned.
 
#13 ·
In this thread, you were talking about operating the pack in a lower state of charge than is commanded by the normal battery management routines in the car. So, the last half of your post above applies to my question.
 
#14 · (Edited)
^ I see. But ultimately I think they both apply. If you don't start with balance you don't end up using low charge state -- some cells will be low, others higher. That's probably the number one problem in the car: the car doesn't allow much discharge below 132V, or 1.1V per cell. But that's only if ALL the cells are balanced.* In the likely typical pack condition, cells are undoubtedly at least somewhat uneven, where a few cells might get down to 1.1V, while all others are above say 1.2V. And then more imbalance sets in, and it's a vicious cycle...

*It's actually even worse -- because even to get down to a mere 1.1V (132V) you have to purposely 'go there'. Most Insight drivers probably rarely if ever get their packs much below 140V, no fault of their own, as you'd expect the battery management to do the proper thing. But I don't think it does.
 
#17 ·
I was re-reading this thread and thinking about the deeper electro-chemical processes taking place, which can explain why I'm seeing what I've been seeing. One thing I've always struggled with is a sort of 'spatialization' of the electro-chemistry, conceptualizing the spatial arrangement in the cell of the electro-chemistry going on. But, I think I'm understanding something more clearly, now...

When a cell charges, nickel hydroxide (Ni(OH)2) is converted to nickel oxyhydroxide (NiOOH). When it discharges, NiOOH is converted back to Ni(OH)2. Various stuff can happen that obstructs these basic, core reactions. For example, one of the main forms of degradation - of 'obstruction' - is the growth of large nickel hydroxide crystals: reactions take place on the surface of crystals, so if you have larger crystals you have less surface area where the reactions can take place. The result is higher resistance, smaller usable capacity, more voltage sag, etc. Deep/ultra-deep discharging can shrink the crystals and fix these things. There are other, similar problems that discharging helps in similar ways.
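
For reference, these are the textbook positive-electrode half-reactions behind that (standard NiMH chemistry):

Charge: Ni(OH)2 + OH- -> NiOOH + H2O + e-
Discharge: NiOOH + H2O + e- -> Ni(OH)2 + OH-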

But, what I've never been able to understand is how different discharge currents and different end voltages upon discharge can all offer improvements, and not necessarily in a hierarchical order. For example, in this thread I'm talking about improvements based primarily on using lower charge state in the car. Well, that's happening at relatively high currents and not-all-that-low voltages. What's more, I've already ultra-deep discharged these cells, at super low current: If I've already ultra-deep discharged these cells, why would I continue to see improvement from these higher current not-so-low discharges in the car?...

Well, here's the thing (and I really need to bang this out before it slips away): When you don't use low charge state, reactions are always taking place on the low hanging fruit. You have a lot of capacity, a lot of active material, at your disposal, so there's no reason for the reactions to take place anywhere but on that low hanging fruit, i.e. on the surface of whatever 'crystals' are around. Some are big, some are small, but there's enough of it in this mixed form, so why bother with the stuff that's harder to reach? I.e., why break down larger crystals into smaller ones if you don't have to?

If charge state is usually high, there's enough active material to support the loads demanded, simply by virtue of there being more material charged, almost in a brute force sense. When you deliberately take the cells low (or if the management did it from the start), you perpetually force the electro-chemistry to 'reach deeper' into the material - there simply isn't a lot of low hanging fruit to pick. Larger crystals, for instance, are broken up into smaller ones (and then these smaller crystals can support higher currents)...

High charge state use is like a mile wide and an inch deep, growing wider and less deep day by day; low charge state use is like perpetually tilling deeper and deeper, forcing reactions to happen in the hardest to reach 'places'. What you end up with is a cell that's deeply reconditioned, optimized...

I have to leave it there. There's a lot of loose ends and it's all pretty nebulous, a lot of analogizing. But it will have to do for now.
 
#18 · (Edited)
I've been thinking about a couple different things based on my most recent IMA/pack 'testing' and general driving observations, and stuff. One is my 'moving window theory' of charge state and the other has to do with top-end self discharge.

'Moving Window Theory' of Charge State Usage
I'm thinking about the moving window theory in the wake of rock-bottom charge state usage, after which I have charged the pack up, used it (seemingly) high, and I'm seeing results I wouldn't expect unless something like a 'moving window' of charge state were in play... I only want to briefly mention this as a sort of place-holder, create the simplest of sketches, the outlines of this "theory"...

In a nutshell, I'm thinking cells can function well, perform at a high level, within any given modestly sized charge state range, around any given charge state level, so long as the most recent charge state usage is within that range. For example, you can have high performance within a charge state window equal to say 20 percentage points of capacity, centered on a true charge state of say 15%, as long as you continually(?), repeatedly(?) use the charge state between 15% plus or minus 10 points, so 5% to 25%. OR it can be a range of 20 points centered on say 60% true charge state, so 50% to 70%. But, at least with old, used cells, it sort of looks like you can't have equally high performance across the full charge state range at all times, at any given moment...

If you use predominantly low charge state, the high charge state will languish; if you use predominantly high charge state, the low charge state will languish. There do seem to be differences, and perhaps different risks and/or deterioration modes with low vs. high usage, but in general, this is what it looks like to me...

Currently I'm thinking something like this: with old, used cells, you can end up with an imbalanced amount of 'active materials'. In theory a perfect cell would have all 'active materials' in perfect balance, perfect ratios, to sustain full performance at any charge state at any time. But over time and usage, you can end up with a deficiency - say in water in the electrolyte, or maybe nickel-hydroxide - or whatever - so you don't get full performance over the entire charge state range.

BUT, you CAN exercise a limited range and all the active materials needed for full performance end up, I don't know, coalescing, aggregating, orienting in such a way that full performance within that range is perfectly possible. That's partially what I mean when I say "moving window." If you use charge state in the 50% to 70% range, repeatedly, all the stuff you need eventually gravitates to 'that area' and you can get big performance. But if you were to immediately cycle down to the low range, you wouldn't get the same good performance. Conversely, if you use the 5% to 25% range, all the stuff gravitates to 'that area' and you get full performance - but you wouldn't be able to cycle up to the higher range and immediately get the same performance...
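
If I sketch this 'moving window' idea in code, it looks something like the following - every number here is invented purely to illustrate the idea, not a model fitted to any data:

```python
# Toy picture of the 'moving window': the pack performs well near the
# SoC range it has recently been exercised in, and worse the farther
# you jump from that range. Entirely illustrative numbers.

def window_center(recent_socs):
    """The window tracks where the pack has actually been used."""
    return sum(recent_socs) / len(recent_socs)

def relative_performance(soc, center, half_width=10.0, rolloff=30.0):
    """1.0 inside the ~20-point window, falling off outside it."""
    distance = max(0.0, abs(soc - center) - half_width)
    return max(0.0, 1.0 - distance / rolloff)

center = window_center([5, 15, 25, 10, 20])   # exercised low: center ~15%
print(relative_performance(15, center))  # 1.0 -> strong where recently used
print(relative_performance(60, center))  # 0.0 -> weak if you jump high all at once
```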

This is what it seems like I'm seeing, post rock-bottom charge state usage and subsequent cycling up. I haven't been able to charge as high as I 'should' be able to, so it seems (it can be hard to tell the true charge state, but I'm pretty certain this is the case). After my low charge state usage, the first pos recal happened seemingly prematurely. After a few days I've been able to add about another 15 points of charge. The couple times I have gone back low I wasn't seeing the phenomenal performance I was seeing earlier, when I was using low charge state range pretty much exclusively. Hence, the 'moving window theory' of charge state.

Top-end Self-discharge
This was really what I wanted to focus on today, but this is taking too long. So, I'll just throw out a sketch here and try to get back to details later.

When you charge to high charge state with the Insight cells, or really any NiMH cells, there's faster self discharge in the upper range than the middle and lower. That's what "top-end self-discharge" refers to. I was skimming a bit of an advanced battery book and was reminded of stuff that makes what's going on pretty clear. From a practical standpoint - what may be useful to Insight drivers - I was thinking that 'pulling away' from the top, discharging a bit off the top when you park, especially if you're going to have the car sitting for more than a couple days, would probably be a good tactic to preserve cell-to-cell charge balance.

Over the last few months or so, I've come to believe that uneven self discharge is likely the most pernicious form of deterioration when it comes to our packs. Other forms of degradation seem surmountable; uneven self discharge doesn't. It's at least not fixable: I know of nothing that can fix a fast self-discharge cell. In my work putzing with pack reconditioning and such, I'm pretty sure that uneven self discharge was the cause of every ultimate failure.

Now, assuming this is the case, that uneven self discharge, especially when it's uneven and fast, can make our packs unusable, a way to mitigate the impact is important. So that's where the 'pulling back from the top' factors in. If you're using the pack at the top of the car's usable range (which seems par for the course, due to how the BCM operates), you're probably losing a lot of charge to self discharge (SD). And, if SD rates are uneven, then you're inducing a lot of imbalance. I'd guess you could easily end up with about a point or two of imbalance just from uneven self discharge rates in the top charge state range... So, the answer is simply don't leave the car with the pack at a high charge state.

I'm guessing that discharging only a few percentage points off the top, before you park, is all that it would take. In general, I usually see a pack resting voltage above about 168V after a pos recal. That voltage will drop to something like 163V after a day or so. I'm pretty sure this voltage drop is or can be a direct correlate of the amount and/or rate of self discharge. On the other hand, if you leave the pack at, say 163V, you'll usually come back to it and the voltage will be say 163V. So, basically, I think you can preserve cell-to-cell balance, you can thwart uneven self discharge rates, if you discharge enough that resting voltage when you park the car is somewhere around 163V or lower... I think it could easily be the difference between maybe a one or two point net imbalance per day or not.

Let me take a quick look at an old dataset I have, that includes self discharge rates...

So, at a glance, a cell or two known to have faster self discharge have rates that are about double what the other cells show - double during the first few days of sitting after a full charge (the rates even out after some days). These rates are of voltage change, not capacity loss. I'm gonna eyeball it and say the rates are about 3% per day for the fast SD cell vs. 1.5% per day for the others. I also calculated the average capacity loss in mAh per mV change in voltage, based on discharges, and came up with something like 13 mAh per mV. This is all not perfectly comparable, but we just need some rough, back-of-the-envelope stuff... I also have a figure of about 50mAh loss per day for normal cells vs. about 90mAh per day for the fast self discharge cells... I think this is probably a good ballpark figure to use - call it about 50mAh per day of extra loss due to faster self discharge. 50mAh / 6500mAh ≈ 1% loss per day. And it'd be an even bigger share of the usable capacity alone.

So, discharging off the top might be the difference between avoiding about a 1% capacity imbalance per day or not - more like 2-3% per day when we consider usable capacity. And this is for a pretty modest unevenness in self discharge rates. Think about it: the risk is losing 1% per day - so after 30 (successive) days of drive, park, drive, park, you could potentially lose up to 30% of capacity in one cell relative to other cells. If cells started at say 3200 mAh of usable capacity, the fast SD cell after 30 days is now at 3200 minus (0.3 x 6500) ≈ 1250mAh when 'fully car charged'...
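
Here's that arithmetic laid out so you can plug in your own numbers - all inputs are the rough figures above:

```python
# The back-of-the-envelope imbalance estimate, spelled out. All inputs
# are rough figures from my dataset -- ballpark only.

PACK_MAH   = 6500   # nominal cell capacity
USABLE_MAH = 3200   # rough usable capacity of these old cells

extra_sd = 50       # mAh/day the fast-SD cell loses beyond normal cells
print(f"extra loss: ~{100 * extra_sd / PACK_MAH:.1f}%/day of nominal, "
      f"~{100 * extra_sd / USABLE_MAH:.1f}%/day of usable")

days = 30
behind = round(0.01 * PACK_MAH * days)   # 'about 1%/day' for 30 days = 1950 mAh
print(f"after {days} days of drive/park: ~{behind} mAh behind the other cells")
print(f"usable capacity left when 'fully car charged': ~{USABLE_MAH - behind} mAh")
# -> roughly 1250 mAh of the original 3200 mAh usable
```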

Anyway, I think this is in the ballpark. If you have a 'weak' pack, try discharging off the top a bit before you park. Even if you don't have a weak pack, I think it's probably a good tactic to help maintain it. I'll try to get back to this some other time and clean it up, fill in some details, etc.

One thing I need to add before I forget:
When I talk about top-end self discharge, one question that pops up is: But what exactly is the "top-end"? Above, when I talk about top-end self discharge, I talk about the 'top' as if it were an absolute range of charge state, say above 75%. But when I talk about the moving window theory, the top can actually be anywhere, really. This is confusing.

I'm pretty sure that the "top-end" in "top-end self discharge" can be the top of the moving window, while the moving window can be anywhere within the absolute 0-6500mAh, 0-100% charge state range afforded by the raw capacity of the cells (or however much is left in old, used cells). You can see greater than, say, 163V resting voltages even at a low absolute charge state. If you're using the 5% to 25% absolute range, the 'top-end' will be around 20-25%, that is, you'll see resting voltages well above 163V at 25% if you charge from 5% to 25%. And I think the top-end self discharge idea applies here as well, even though you're at an absolute low charge state.

There's probably differences, it's probably not as bad as it would be were the 'moving window' at an absolute high charge state. But I think the mechanisms for faster self discharge are still there, at the low absolute state but at the top of the moving charge state window.

So basically, so long as pack voltage is above about 163V, you'd need to discharge a bit 'off the top' if you want to slow down and even-out cell-to-cell self discharge.
 
#19 ·
I've been re-reading a chapter from the old 'Huggins' advanced battery book, trying to piece together info about NiMH electrochemistry that supports the top-end self-discharge idea above, or at least provides some of the underpinnings. Yesterday I said I skimmed some of that and it made "what's going on pretty clear." I don't have the patience to quote passages and go line by line, but I figured I should try to get something down...

The gist of it is this: When the positive electrode is fully charged, NiOOH is in contact with the electrolyte. Normally, NiOOH isn't in contact with the electrolyte. I don't quite understand how that works, but it's what I've read in a number of locations. NiOOH is a 'mixed conductor': it can pass protons or electrons. Ni(OH)2, on the other hand, is not electronically conducting. So, usually or normally, Ni(OH)2 is a barrier between NiOOH and the electrolyte - that is, until the electrode is fully charged and NiOOH comes into contact with the electrolyte.

You can probably instantly see what's going on with top-end self discharge: NiOOH comes into contact with the electrolyte, reactions take place between water and the NiOOH, resulting in hydrogen being added to the electrode, converting NiOOH to Ni(OH)2. This is self discharge. With the addition of protons to the electrode, the potential drops and thus cell voltage drops... I imagine that as more and more NiOOH is converted to Ni(OH)2, Ni(OH)2 increasingly functions as a barrier layer and self discharge slows down.
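
In reaction form - this is the commonly cited overall route, my gloss on the Huggins description - the charged material oxidizes water and takes up the resulting protons:

4NiOOH + 2H2O -> 4Ni(OH)2 + O2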

In conjunction with my previous post, with the 'moving window theory of charge state' - I'm thinking it's probably not a cut and dried thing, where this process can only happen at an absolute full charge; rather, I imagine it can happen whenever voltage is appreciably high, say, above the equilibrium voltage. I.e., I'm thinking this process can happen at the top of the moving charge state window.
 
#20 ·
One caveat to the idea that discharging a bit 'off the top' might help mitigate uneven self-discharge: it depends on the differential impact of the high charge state circumstance on self discharge rates. I know, that's a mouthful...

I'm looking at data that show cell level self discharge rates. Rates are fast at high charge state and gradually slow down. The few cells with faster self discharge seem to have faster self discharge only at the high charge state. After some days, the self discharge rates for all the cells tend to even-out. In other words, these data show faster self discharge for a few cells vs. the rest only when the charge state is high. Thus, when you discharge off the top you bring all cells down to a charge state where self discharge rates are more even.

But, I'm thinking that in general it's possible or likely that top-end self discharge, the mechanisms for which I explained in the previous post, isn't the same as the self discharge that can make one cell's SD faster than another. 'Faster self discharge' might be a function of different mechanisms. For instance, all cells will have some base level of SD due to NiOOH contacting water and all that; on top of that, though, some cells might have a different mechanism of self discharge, say a tiny micro short or something... If you discharge a bit off the top to thwart uneven self discharge, maybe you only end up thwarting fast self discharge at the top, but not the unevenness. It only works if the SD rates are different, uneven at the top but more even lower down. That's what I see in this dataset, but I couldn't say whether that'd be the case typically.
 
#21 ·
Just a quick follow-up on what I was writing about above, and then on to something else I want to mention...

I ended up taking a closer look at the self discharge data I mentioned above, and overall it's a bit too inconclusive, not good enough. I posted a couple graphs based on those data in another thread and explained it a bit there: The quintessential Insight NiMH voltage thread

The gist of it is this: Discharging a bit off the top will mitigate the impact of uneven self discharge rates on cell-to-cell balance only if the faster self discharge cells tend to self discharge even faster at higher charge states/voltages. I just can't conclude that that's the case from the weak dataset I have, and I don't care enough about it to create better data.

* * *

So, since I got back from the Danville Insight meet I've more or less switched to using my pack in the top charge state range, so I've been looking closer at how the car deals with the top. There's really a lot of interesting stuff going on, a lot of things to talk about. For now I just want to mention a couple things about the high charge state cutoff 'algorithms' in play. Mainly, there must be a quite refined program going on in one or more of the computers that makes sure you're not overcharging a cell, stick-pair, pack - whatever...

I think it was here or maybe in another thread where I talked a lot about the low-end slope detection algorithm in play: the BCM is able to detect when a cell is empty by calculating the slope of stick-pair voltage discharge curves - a steep slope reflects an empty cell and the BCM (or MCM) throttles discharge current and then throws a neg recal on the second detection of steep slope. I mentioned at the time that I thought it was likely something similar plays out at the top end, to determine full or too-full... Can't say I know that's what happens, but definitely something similar and very iterative does play out.
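
A minimal sketch of the kind of low-end slope check I mean - the thresholds, sample rate, and two-strike rule are my guesses at the shape of the logic, not reverse-engineered BCM values:

```python
# Minimal sketch of tap-voltage slope detection for an 'empty' cell.
# All constants are invented for illustration.

from collections import deque

SAMPLE_S        = 0.1   # hypothetical sampling interval, seconds
WINDOW          = 10    # samples per slope estimate (one second)
EMPTY_SLOPE_V_S = -0.5  # V/s on a tap under load: 'steep' = a cell going empty

class TapSlopeMonitor:
    def __init__(self):
        self.samples = deque(maxlen=WINDOW)
        self.strikes = 0

    def update(self, tap_voltage: float) -> str:
        self.samples.append(tap_voltage)
        if len(self.samples) < WINDOW:
            return "ok"
        slope = (self.samples[-1] - self.samples[0]) / ((WINDOW - 1) * SAMPLE_S)
        if slope < EMPTY_SLOPE_V_S:
            self.strikes += 1
            self.samples.clear()
            # first steep-slope detection: throttle discharge current;
            # second detection: throw a neg recal
            return "throttle" if self.strikes == 1 else "neg_recal"
        return "ok"
```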

I've been resetting state of charge with the OBDIIC&C and more or less trying to stuff the pack, teetering at the edge of when this high cutoff algorithm kicks in. I noticed a handful of things today that I've either never noticed or never really thought too much about. Here's a couple of them off the top of my head:

- At some juncture during the sequence of 'full detection' parameters, one of the computers commands a discharge: the 12V load will be sourced directly from the IMA pack rather than the motor.

This is a really weird little program. I can't tell exactly what triggers it, and then it usually only lasts until I 'press the clutch pedal', i.e. I can trigger my calpod switch, ON then OFF quickly, and that will disable this drain.

Sometimes when this drain is happening, subsequent regen will trigger the dash CHRG lights - but OBDIIC&C shows no current. So, it's like the BCM or ECM triggers that discharge/drain, but perhaps the MCM doesn't get the message(?) - whatever drives the dash regen lights, that's still acting like everything's normal...

This drain seems to be an 'afterthought': it's not the main/first high cutoff behavior, it seems to happen only after 'something else' happens first. For instance, maybe an initial high tap voltage or steep slope is detected and regen current is throttled/limited. But then, perhaps a high tap resting voltage is detected, perhaps for a set duration, and then the drain will kick in... I know that the drain will normally kick in once the nominal charge state reaches its normal set max, such as 81%. But if you're manipulating the system, such as by resetting nominal charge state from 80% to 75%, this normal trigger is defeated, and the other real-time monitored parameters or what-not come into play, are revealed, etc...

- It looks like the absolute cutoff is a resting voltage of 17.4V at the tap level (1.45V per cell), or equivalent.

I can watch total pack voltage and current during regen and see that the pack itself isn't quite full; typically I'm keeping an eye out for about 186V at about 6.5 amps as an indicator of truly full. I've gotten closer, but still quite far away from that. I think I've seen maybe 180V at maybe 10 amps. But the highest resting voltage I've seen is about 174V, and usually I haven't been able to get that to 'stick' for long; 174V for maybe 30 seconds, and then a more stable 172-173V. At this point the 'car' is not allowing any more charge.
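
Scaling those numbers between cell, tap, and pack level (the 1.45V/cell figure is just 17.4V over 12 cells; 186V at ~6.5A as 'truly full' is my own rule of thumb, not a documented spec):

```python
# Top-end cutoff numbers scaled between cell, tap, and pack.

CELLS_PER_TAP, CELLS_PER_PACK = 12, 120

tap_cutoff = 17.4
per_cell = tap_cutoff / CELLS_PER_TAP
print(f"{tap_cutoff}V tap = {per_cell:.2f}V/cell "
      f"= {per_cell * CELLS_PER_PACK:.0f}V pack equivalent")
# -> 17.4V tap = 1.45V/cell = 174V pack equivalent, which lines up
# with the ~174V resting peak I've seen before the car refuses more.
```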

You can charge substantially more than the car would normally allow if you just work the system a little. For example, with one of the BCMs that pos recal to 75%, you can charge from 75% to 80%, reset with OBDIIC&C to 75%, charge another 5%, reset again to 75%, and so forth. But then, once you start seeing the 'automatic drain', you can discharge just a little to bring voltage down, yet then charge even more than you discharged without triggering the absolute top end cutoffs.

It all seems precariously designed around tap voltages - voltages that suffer a ton of hysteresis. I'm pretty sure OEM Insight cells, and probably Civic cells, can exhibit a fairly large degree of voltage hysteresis at the top end (i.e. the voltage can vary a lot), but that variation is due to short-lived, temporary, reversible electro-chemical phenomena. Does the BCM adequately deal with this? I don't think it does.

It seems like the BCM must have fixed voltage thresholds (probably adjusted for current and temperature), and once a tap hits the threshold, or once a steep slope is detected, charging is done. But subsequent usage around that charge state can 'loosen' things up: similar to how low charge state usage can raise sagging voltages, high charge state usage can lower peaky voltages... After this 'loosening', you can charge more while staying under the absolute cutoffs...

Personally, with my pack in its current state, I'm thinking a lot of this extra charge I'm able to do probably stems from me having used the pack at rock bottom charge state for the last month or so. I did cycle up some times during that low end usage, but most usage has remained low. I don't really have the greatest data, but after that low charge state usage, the first time I cycled up I was able to charge the pack to an adjusted, estimated real charge state of only about 30%, i.e. the car pos recal-ed at what I estimate to be only about 30% true charge state - not the '75%' you'd expect. That was like two weeks ago. Since then I've concentrated usage toward the 'high' end (above this real 30%, and usually as high as I could go) and now my estimated adjusted true charge state figure is at 67% (that's probably a slight under-estimate, though)...

In other words, two weeks ago I could charge the pack to 30%, now I can charge the pack to 67%. Pretty sure this would never happen were I not juking the system. I'm not sure if grid charging and discharging would accomplish the same thing... It wouldn't if the treatment intervals were too far apart - more than 6 months? 3 months? I imagine that would depend on the condition of the cells.
 
#22 ·
Was looking a little more at the top-end cutoff behavior today, particularly that automatic discharge/drain. It seems like it has to be, or at least can be, triggered by something that's transient, short-lived and/or the 'drain command' itself is just a one-shot deal, like the command says, 'discharge until you see XXX', "XXX" being, for example, clutch pedal press or probably any subsequent IMA usage, like assist or regen, after which it resets and looks for the trigger parameter again, whatever it is...

If I had to guess I'd say it's probably tap voltage slope: when I do coasting regen and it's a modest rate, say 7-12 amps, total voltage might be at like 175-180V, but voltage on a single tap is probably increasing faster than others (i.e. it's more charged, or at least one cell in the stick pair is). I can do this repeatedly and the drain is invoked almost in lock-step with whatever I'm doing. Hard to explain.

I can gauge just how close the pack is getting to 'full' and I can see just how much I'm able to input, and I get a feel for when the drain is going to trigger, and the pace and rhythm has the feel of a cell reaching full, as if I were charging a cell on the bench. Charge slope near full gets steeper and steeper (that is, until it peaks). So, the sense I get is that slope is being measured under the slight charge load, and once it reaches the set threshold, the drain kicks-in. But, if I disable the drain such as by hitting the calpod and try to trigger it again, it will happen again - sooner. If I discharge a little and repeat the process, the drain takes a bit to kick in. I disable the drain, try to trigger again, it happens sooner. And again and it happens sooner, etc etc...
 
#23 ·
....Charge slope near full gets steeper and steeper (that is, until it peaks). So, the sense I get is that slope is being measured under the slight charge load, and once it reaches the set threshold, the drain kicks-in. But....
This isn't exactly true. Charge slope gets steeper and steeper, but only up to a point, after which it gets shallower and shallower until the cell is full, voltage peaks and flattens out, and then falls if charging continues. IF the BCM uses slope detection at the top, I imagine it would use the steeper-and-steeper part of the curve, but it seems like it could use the shallower-and-shallower part - either instead of or in addition to the steeper part. For example, perhaps it detects when the voltage curve gets steeper and steeper and implements some controlling behavior, such as regen throttling. But after that, perhaps under the right conditions, it also detects when the voltage curve gets shallower and shallower - and implements more aggressive regen throttling or outright disabling... I haven't really seen absolute disabling, not when nominal charge state hasn't reached 81%...
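A crude way to picture telling those two phases apart from voltage samples - made-up classifier logic, just to illustrate the steepening-vs-flattening distinction, not anything I know the BCM does:

```python
# Made-up classifier for the end-of-charge curve: rising slope-of-slope
# means the steepening phase; a falling (but still positive) slope means
# the flattening phase approaching full.

def charge_phase(v_samples):
    """Classify the last three voltage samples of a charging tap."""
    d1 = v_samples[-2] - v_samples[-3]   # earlier slope
    d2 = v_samples[-1] - v_samples[-2]   # latest slope
    if d2 > d1 > 0:
        return "steepening"   # mid-rise: maybe throttle regen
    if 0 < d2 < d1:
        return "flattening"   # near full: throttle harder or disable
    if d2 <= 0:
        return "peaked"       # flat or falling: cell is full
    return "steady"
```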

In any event, was looking a bit at regen limit flag "Rlf" on OBDIIC&C today, while doing stuff similar to what's described in the previous post. In general, Rlf reads 0 when regen isn't throttled and 1 when it is. I said above that if I had to guess I'd say slope was being detected and triggers throttling and/or drain. But watching Rlf it almost seems like there could be multiple criteria at play. For example, most often I would see Rlf trigger 1 only after the regen event was over - do some coasting or braking regen and when back on throttle Rlf flips to 1. You'd think that if slope were the controlling parameter, Rlf would flip 1 during the regen, not after, wouldn't you?... Flipping 1 after the regen makes it seem more like open circuit voltage/resting voltage is in play. But, I also saw Rlf flip to 1 during regen, so perhaps both slope and resting voltage, or even max loaded voltage - they all could be in play...

Rlf doesn't stay at 1 (under these circumstances; as I recall it does if you let nominal SoC max out and the auto drain is locked in). It only flips to 1 for brief periods. However, throttling/drain behavior sticks around even though Rlf isn't 1. That makes me think a timer is indeed in play - a timer and/or a cancelling behavior, such as assist or depressing the clutch (turning 'calpod' on and off)...

If you have a BCM that pos recals to 75% rather than ~81%, it seems like you have a lot more control of how much more you can charge the pack at the top. Resetting SoC from say 80% to 75% and then stuffing it some more allows way more charge than resetting low repeatedly and letting the car pos recal on its own. I can reset low and get pos recal say a couple times, but after that additional resets low don't allow any more charge, pos recal happens right away. But, continuing to regen above 75%, taking it up to 80%, resetting back down to 75% and repeating the process, juking the BCM's top-end algorithms, can stuff a ton more into the pack. I've added something like 20% just over the past two drives...

I have one tap that's about 50-100mV higher than the others when loaded at around 6 amps (say 17.10V vs. 17-17.05V). I think that's due to persistent, 'hard-core' high IR (rather than high resistance due to electrochemical stuff). I imagine that tap might cause 'premature pos recal' or 'premature full'. Now, I think it would cause premature full if the high cutoff parameter were high loaded voltage, possibly high resting voltage. But I don't think it would if the parameter were slope, steep or otherwise: I'm pretty sure 'high IR' just shifts the curve higher (or lower on discharge), but it doesn't change the contour, in general. Normally I'd think this tap with a high IR cell (or 2) would have a higher voltage spike at the top under charge load, but the voltage would drop lower than the other taps when the load was removed. Yet these NiMH cells, the ones that have persistent high IR, seem like their voltages can kind of get stuck up there; they don't fall lower... I don't really know why that happens, how that works... So, persistent high IR = premature full if the trigger parameters are loaded voltage and resting voltage, I guess, but probably not steep slope...
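Here's a toy example of the 'shifts the curve but not the contour' point - the curve and resistance values are invented, but it shows why an absolute-voltage cutoff would trip early on a high-IR tap while a slope detector wouldn't:

```python
# Toy numbers: extra series resistance shifts a charging tap's voltage up
# by a constant I*R offset but leaves the shape (the slopes) unchanged,
# so an absolute-voltage cutoff trips early while a slope detector
# doesn't. Curve and resistance values are invented.

I_CHG = 6.0        # amps of charge
R_EXTRA = 0.010    # ohms of extra 'hard-core' IR on the bad tap

base = [16.90, 16.95, 17.02, 17.12]              # healthy tap, volts
high_ir = [v + I_CHG * R_EXTRA for v in base]    # shifted up ~60 mV

slopes_base = [round(b - a, 6) for a, b in zip(base, base[1:])]
slopes_high = [round(b - a, 6) for a, b in zip(high_ir, high_ir[1:])]

print(slopes_base == slopes_high)  # True: identical contour
print(max(high_ir))                # ~17.18: crosses a 17.15V cutoff first
```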
 
#24 ·
Maybe not the right thread, but you are the NiMH guru.

Modelling packs..


Could we model a pack's behaviour in a sophisticated spreadsheet with nice graphs as we step thru data or time?

I'm rubbish with spreadsheets so this is out of my league.

But I'm thinking: build a spreadsheet with 20 sticks' data, each assigned a value for capacity, voltage, IR and, most importantly, self-discharge rate.

The spreadsheet allows you to input nominal SoC as a start point, then it plots daily stick SoC etc and calculates pack imbalance and whatever else we fancy looking at, for however many days/weeks/months we want...

A bit like weather modelling. We could predict how far out of balance a pack would be after 20/30/90 days etc.

If you quantified a set of 20 sticks accurately and input that data into the model, we could see how it would react to charging/discharging/cycling etc etc.

Depending on how clever the spreadsheet is, we could add the Peukert effect and natural balancing due to efficiencies etc etc etc.

A shared Google sheet might be best as several people could work on it.
If you think this should have another thread that's fine.

We might be able to determine if self discharge is linked closely and predictably with internal resistance or other factors like terminal voltage etc.

Meaning that easy-to-determine IR could be used to predict or model SD without having to actually measure the stick SD rate...

Just lockdown thinking..
 
#25 ·
^ hmm, I think I understand everything, and it makes sense, up to the last couple lines. Pretty sure self discharge isn't correlated with 'IR', or if it is it's a negative correlation ('high IR' correlated with slower self discharge)...

In general, are you talking about a theoretical model, where you plug in fictitious values and the overall model is simply used to inform and educate, or a real one, where you plug in real values and try to, say, diagnose a pack? Sounds like you're talking mainly about the former... I think a very simple model might be doable, and might be useful, though I doubt anyone would be interested.

I think anything that tries to get too 'real-world' would get too complicated and would probably end up missing the forest for the trees...

Even just modelling 20 sticks (perhaps 10 taps) with a couple variables, like capacity and self discharge, within the confines of management that has only a couple parameters, like top and bottom voltage cutoffs, over time, would require a lot of rows and columns and calcs... hmm, I don't think I know enough to 'model' even a single cell...
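Though, just to show what the bare-bones core might look like - in code rather than a spreadsheet, with completely made-up numbers:

```python
# Bare-bones version of the idea: 20 sticks, each with a made-up capacity
# and self-discharge rate, stepped day by day to watch imbalance grow.
# Real sticks would need measured values plugged in.

import random

random.seed(1)
sticks = [{
    "cap_ah": random.uniform(5.0, 6.0),         # usable capacity (unused
                                                # here; matters once you
                                                # add charge/discharge)
    "soc": 0.60,                                # everyone starts at 60%
    "sd_per_day": random.uniform(0.001, 0.006), # fraction of SoC lost/day
} for _ in range(20)]

for day in range(1, 91):
    for s in sticks:
        s["soc"] = max(0.0, s["soc"] - s["sd_per_day"])
    if day % 30 == 0:
        socs = [s["soc"] for s in sticks]
        print(f"day {day}: imbalance = {(max(socs) - min(socs)) * 100:.1f} SoC points")
```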

I don't think I'd be up for it. I barely enjoy 'modelling' my whole pack with the simplest of data and calcs, let alone trying to do 20 sticks with added complexity. I can't think of anyone else around here who would care enough to participate... I don't think anyone even looks at the simple charts I've done, which aim to elucidate relatively simple (though fundamental) concepts; I imagine it'd be even more...oppressive for people to 'look at' a 20 stick model...
 
#26 ·
Took my pack back down to the bottom today and watched assist limit flag ('Alf'), still on 010 BCM. A couple things I noticed:

-assist gets throttled when nominal charge state gets low, I think it was around 30%, but I wasn't seeing the assist limit flag.

-I reset nominal SoC from around 28% to 40%. As mentioned earlier, with the 305 BCM I can do that and defeat throttling; with the 010 BCM I wasn't able to last time, but this time I was. It only lasted until Alf triggered, though, which wasn't too long after...

It looks like Alf is triggered by tap voltage slope detection (near empty cell), like every time a steep slope is detected Alf flips to 1 (and back to zero). And every time Alf flips to 1 assist is immediately throttled. With the 010 BCM I can charge back up a little, like 5-10%, and then use assist freely as long as nominal SoC is set artificially high (such as 40% rather than 28%) and Alf isn't triggered. With 305 BCM I'd be allowed two steep slope detections, the first one at whatever current, the second one low current, and then a neg recal.

* * *
One other thing I've been noticing in general, and something I'm starting to believe, is that my pack doesn't perform as well when it's charged high and used high versus when I use it low. I had it charged to probably a true 75%, but it didn't take much assist, maybe 10-20% of capacity, for total voltage to look pretty saggy. Like, after only about 10-20% usage, total voltage was below 144V at about 20 amps. When I've used the pack low, I've seen higher voltages longer, typically not below about 144V until close to the bitter end. Of course, if I'm using it low I'm closer to the bitter end to begin with. But I know I can get more than 10-20% of usable capacity at loftier voltages - I'm thinking it's more like 35-45%...

This is pretty hard to explain. Earlier I had been thinking that low-end usage restores 'curves', restores performance, across a whole absolute charge state range. I can use the pack low and see performance down low improve, and I've generally assumed that improvement at the low end meant improvement at the top as well, at the same time, that they go hand in hand. But over time I've come to see that that's not the case.

I'm pretty sure that low end works better than high end, and I'm not sure why that's the case. But even so, a trade-off is happening: low-end performance comes at the expense of high end performance, and vice versa...

Basically, there's like two things happening:

1. When you use the low end, it's like you drag active materials from the top down to the low end. You see performance gains down low. If you could miraculously, instantaneously transport your usage to the top, however, you would NOT see great performance - there really is no top end anymore. You've dragged stuff from the top down low, so as long as you're seeing good performance down low you can't see good performance up high; it's a trade-off.

Of course, you can't just instantaneously go from low usage to high usage; you have to charge back up. The first few charges you won't see good performance up high, it takes several cycles to start seeing good performance up high after you've been using the pack low. Why? Because you're now dragging active materials back up to the 'top', the cycling up high is re-concentrating active materials 'up there'... The same or similar things happen when you go from top usage to bottom usage...

2. When you use the low end you probably also recondition 'stuff' down there. I think this is a different, distinct process. For example, you burn off bad stuff - 'crud' - shrink crystals, and probably achieve some kind of balancing among cells in the process, etc etc. This isn't the same thing as the 'dragging stuff from high' type of reconditioning, or what have you...

From what I can tell, this number 2 happens early on, and then you're stuck with the finite, stunted capacity of used, old cells, and all you can do is drag the top end down or drag the bottom end up, achieving good performance within that window but never across the entire 6.5Ah capacity (or maybe 5.2Ah capacity, as I imagine even new cells can't do this "good" performance, such as 90 amps for 4 seconds or 45 amps indefinitely, across the entire 6.5Ah range)...

So, the question now is: given the trade-off, where you can have good performance low or high but not both at the same time, why does the low end still seem to work better? Just not something I understand.

Falling back on old, boilerplate ideas, it seems possible that charging at high currents at relatively high charge states can quickly 'crud up' the cells -- inducing 'voltage depression'... In general it seems like a high charge state is a stressed state by default; the cell is wound up. That would be bad for charges, but you'd think it'd be good for discharges. It just doesn't seem to work out that way... Maybe it's a combination: high charge state is stressed, so charging stresses it even more, crudding things up, and then discharges end up weak - because the 'crud' gets in the way. But when you charge down low, the cell isn't in such a stressed state, so crud creation is minimal - discharges end up good...
 
#27 ·
More thoughts on 010 vs. 305 BCM low charge state slope detection and basically other possible 'battery empty' indicators:

I'm thinking it'd be possible or likely that different BCMs have different values for steepness of slope. In this case I'm thinking the 010 might have a shallower slope value, as it seems to trigger sooner, faster, and is allowed to trigger repeatedly compared to the 305. The 305 might have a steeper slope value that's only allowed to trigger twice before disable (neg recal, 'battery empty'); the 010 might have a shallower value that's allowed to trigger over and over, until something else finally signals 'battery empty'. I still haven't gotten a neg recal though with the 010 this time around...
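If you wanted to express that hypothesis compactly, it might look like this - the slope values and trigger counts are guesses meant to mirror the behavior I've seen, not real firmware constants:

```python
# Same detector, different tuning: one way to express the 010-vs-305
# hypothesis. Slope values and trigger counts are guesses meant to
# mirror observed behavior, not firmware constants.

BCM_PROFILES = {
    "305": {"slope_v_per_s": -0.08, "max_triggers": 2},     # steeper, 2 strikes
    "010": {"slope_v_per_s": -0.04, "max_triggers": None},  # shallower, no limit
}

def empty_check(bcm, tap_slope, triggers_so_far):
    """Return (alf_triggered, neg_recal, new trigger count) for one sample."""
    p = BCM_PROFILES[bcm]
    alf = tap_slope <= p["slope_v_per_s"]      # too-steep discharge slope
    count = triggers_so_far + (1 if alf else 0)
    recal = p["max_triggers"] is not None and count >= p["max_triggers"]
    return alf, recal, count
```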

One of the differences I suggested earlier was that the 305 throttles assist strictly due to nominal charge state, which can be easily circumvented by resetting charge state artificially high, whereas with the 010 that seems more difficult. But it might be the other things that end up making it difficult, not the nominal charge state per se. With just a little charging (5-10%), 010 BCM, nominal charge state reset from 28% to 40% yesterday and held over to today, I see no problem freely using assist - that is, until 'Alf' triggers. And even then the throttling isn't immediately heavy. So, it appears that maybe multiple Alf triggers alone end up resulting in heavy throttling, not a low nominal charge state nor, say, a net amp-hour count that underlies a more real value for charge state...

In other words, when it comes to the low-end charge state/empty behavior of the 010 vs. the 305, they appear to be more similar than I was suggesting earlier; the only major difference might be this multiple-Alf-trigger, perhaps different-slope-value idea.
 
#31 ·
Continuing with the whole 'Alf'/empty behavior line of questioning, I'm seeing things with this 010 BCM that complicate my interpretations to date of 'what's going on'. I saw at least a couple things today that just don't square...

-In auto-stop, ~1 amp discharge load, near empty, I got a neg recal but never once saw Alf trigger. I was saying earlier that Alf must trigger in response to slope detection, and I've also said that neg recal is a response to slope detection. So how can both of these be true when I get a neg recal (due to slope detection?) yet never see Alf trigger? In theory, Alf should trigger first, and the neg recal should happen shortly thereafter (under load)...

-I had a tap well below 14V, at 13.53V, but did not get DCDC disable. Earlier I had said that the DCDC must disable when a tap drops below 14V resting, or perhaps near resting, but here it didn't. Total voltage was relatively high, above 144V.

I just can't figure out what the logic is here, with Alf, throttling, neg recal, and DCDC disable. It seemed pretty clear, cut and dried with the 305 BCM. But there's something different with this 010 that's mucking things up.
 
#32 · (Edited)
More, but different, observations...

Generally the same test setup as earlier - auto-stop, DCDC active about 1.2A load, pack on verge of being empty, try to measure tap voltages.

This time, the first time I pulled into my garage and had the car in auto-stop, I lost DCDC very quickly -- DCDC disables, the 12V load switches to the 12V battery only, and the car doesn't come out of auto-stop. So I had to restart, back up, and get it back into auto-stop with DCDC active. I never saw Alf (I could've missed it) nor got a neg recal - just DCDC disable.

This would be relevant to a couple threads I've seen recently about losing auto-stop and 12V warning light...

When I finally got it re-engaged, I decided to try something different. I measured tap voltages under load (~1.2A), then decided to up the load by turning on the headlights, then high beams. The moment I turned the high beams on I saw Alf flip to 1 and the DCDC disabled. No neg recal. Total pack voltage was above 144V...

Not sure what to make of it. Obviously the increased load dropped tap voltages, and it seems clear that, given the high total pack voltage, Alf is responding to tap voltages. The question is whether there's an absolute low voltage threshold that gets crossed when Alf triggers, or if there's a slope calculation taking place. I can't really tell.

The way Alf triggers under a constant load - and the way the MCM must be responding with assist throttling, at least with this 010 BCM - makes me think it goes something like this:

I've seen this now over several trials: with a near-empty pack (but total voltage above 144V), Alf flips 1 then back to zero, and then seconds later it flips 1 again, and so on, as long as the load is held. Each time Alf flips 1, assist is throttled more. Watching total voltage, seeing the way each throttling event results in total voltage essentially holding steady, I'd have to say there's a constant slope detection going on: when the BCM measures a too-steep slope, Alf flips 1 and assist is throttled so as to prevent the falling tap from falling too much, too fast.

I don't think it can be an absolute voltage threshold, though. Knowing what I know about the condition of the sticks underlying my taps, I've got single cells that drop out early, so total tap voltage remains relatively high; it's just the steepness of the discharge slope that's being detected... When the steepness is detected, assist is throttled/load decreased, and the tap voltage stops falling as fast. But of course if the single cell is truly near empty, voltage will continue to fall, so as it does the BCM detects the steepness again, Alf flips 1, and assist is throttled even more...
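Put as a sketch, that repeated trigger/throttle pattern might be as simple as this - illustrative numbers only, nothing measured:

```python
# Sketch of the repeated trigger/throttle pattern: each too-steep tap
# slope flips Alf and cuts the allowed assist a bit more, flattening the
# slope until the emptying cell sags again. Illustrative numbers only.

def throttle_step(tap_slope_v_per_s, assist_limit_amps,
                  slope_trip=-0.04, cut_factor=0.7):
    """One control step: returns (alf, new assist limit)."""
    if tap_slope_v_per_s <= slope_trip:            # too-steep discharge
        return 1, assist_limit_amps * cut_factor   # Alf flips 1, cut more
    return 0, assist_limit_amps                    # Alf stays 0

# Successive triggers throttle harder: 50 -> 35 -> 24.5 amps etc., which
# is why total voltage roughly holds steady between triggers.
```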

What's 'interesting' with this 010 BCM is that even though I'll see many successive triggers of Alf, I don't see neg recals, not for a while. That's different from the 305. One thing that occurred to me yesterday is that this 010 was a discontinued model, apparently because of a 'cold discharging' bug - supposedly, this BCM will allow too much power draw under cold conditions. It makes me wonder whether what I see here, with successive Alf triggers yet no neg recal, no disable, might be related...

In any event, I think I might go back to the 305 BCM. Its 'empty behavior' just seems more cut and dried, easy to predict, easy to work with. With the 010 it seems like there are 'things' at work that might be at cross-purposes with one another, like there are a few different things going on that dictate 'empty behavior', yet they're not all on the same page... As I mentioned earlier, this 010 is a very low serial number computer, probably one of the first. I can imagine that Honda still had some programming bugs to work out when it was released. I can see how a threshold for one algorithm, say one responsible for 'catching empty cells' and/or 'throttling assist', might not be perfectly in sync with a threshold for another algorithm, perhaps one meant to disable the DCDC to reduce the load on the pack, etc. Seems like it'd be easy to not get 'it all' working together just right -- so you can end up with a DCDC disable yet no neg recal; a neg recal yet no Alf trigger; an Alf trigger yet no forced charge (I've been seeing that, too); and on and on...
 
#33 ·
Still thinking about 'empty pack' behavior. Want to document one thing quickly before I forget it, again.

It's actually been kicking around IC for a long time -- the idea that most 'throttling' happens when a tap drops below 13.2V, i.e. 1.1V per cell with 12 cells in series (1.1V per cell conforms to Panasonic's recommended discharge cutoff). I was reminded of that today when I saw Alf flip 1 and was watching for the absolute voltage threshold, albeit at the pack level.

I recalled an old tap voltage graph that Eli made a long time ago, which shows one tap dropping a little lower than the others and current throttling every time that tap's voltage dropped to about 13.2V. It's not totally cut and dried, as the initial threshold I believe is 12V, and subsequent throttling starts a bit above 13.2V; shortly thereafter the value does look like 13.2V... hmm, though it's not exactly 13.2V, and not exactly the same at each throttling event. The thing is, the slope for the one tap presumably causing the throttling doesn't look any steeper than the slopes for the other taps -- so it doesn't look like it could be slope detection.

Here's that graph:
[attachment 85227: Eli's tap voltage graph, showing one tap sagging to ~13.2V at each throttling event]
In any event, perhaps Alf does have an absolute threshold - about 13.2V. I don't see Alf trigger when I'm discharging the pack in auto-stop, at a low current, because the current is so low, the tap simply never drops below 13.2V. If this is the case, then it means neg recals have a different trigger criterion: Alf might be an absolute voltage, such as 13.2V, and neg recal is slope detection, perhaps among other things.

Maybe it's not exactly 13.2V, 1.1V per cell, but somewhere around that -- depending on what the nominal charge state is: at higher charge states the value is a little higher, and once charge state goes lower the value drops toward 13.2V... I imagine there could be different enabling criteria across the whole nominal charge state range, and there does appear to be enabling criteria in terms of current/load: if you mash the throttle and hold it there, I think the Alf/throttling voltage threshold is 12V, not 13.2V... But I've been mainly trying to hold that stuff constant, focusing only on what happens when artificially circumventing nominal charge state throttling (by resetting SoC high with the OBDIIC&C) and looking at 'Alf' and neg recal only under particular circumstances - at a modest <25 amp load and in auto-stop, where the load is only about 1 amp...
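A guess at what a SoC- and load-dependent threshold could look like - every value here is speculative, just restating the observations above as code:

```python
# Speculative SoC- and load-dependent Alf threshold, per the observations
# above: a 12V floor under mashed-throttle loads, ~13.2V at modest loads,
# nudged a little higher at higher nominal charge states. All guesses.

def alf_threshold_v(nominal_soc_pct, load_amps):
    if load_amps > 50:          # mashed throttle and held
        return 12.0
    # a little higher when fuller, sliding toward 13.2V as SoC drops
    return 13.2 + max(0.0, 0.004 * (nominal_soc_pct - 40))
```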
 
#34 ·
Just came back here to write something that's akin to what's explained in the above post, not remembering exactly that I already explained it. Basically, I'm pretty sure this is what's going on: Assist limiting (Alf), the real-time variety, is probably a response to absolute low tap voltage of around 13.2V. Neg recal is slope detection.

A loaded voltage roughly equal to 1.1V per cell would be the point at which to implement throttling. At different discharge rates and with different pack conditions you could end up with throttling all over the map. For instance:

-at higher loads, a pack or tap with 'high IR' would see voltages plummeting - and throttling would kick in, even if the pack were at a high charge state.

-Or, you could have an imbalanced pack, where some taps are at say 60% and others are at say 20%, the total pack voltage remains high but even a slight discharge load will drop one tap below 13.2V...

-Or, you could have a voltage depressed pack where at middling charge states even low loads drop a tap's voltage below 13.2V...

BUT, none of this means the pack is empty, not yet. The voltage depression and 'high IR' really complicate potential interpretations of what '13.2V' means. A pack or taps can be charged a lot, but high IR or voltage depression result in saggy voltages - and throttling.

The pack is empty when a cell drops out - that's slope detection. I mean, maybe most cells aren't empty, but with one empty cell the pack is done.

You can have a tap - 12 cells - where 11 cells are say 50% charged and 1 is near empty. The total voltage will hum along at a modest load at a fairly high value. The normal loaded voltage is about 1.25V per cell, so 11 × 1.25V = 13.75V -- well above the Alf trigger point. And we're not even counting the near-empty cell.

But being near empty, that cell's voltage is going to tank asap, and once it does, the slope gets steeper, the BCM detects that and - boom - neg recal. And the clincher here is that you would never see Alf trigger - because the tap's voltage never fell below ~13.2V, the 11 remaining cells uphold a voltage well above that level...
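Here's that arithmetic as a quick sketch - invented cell voltages, but it shows why an absolute threshold would miss this while slope detection wouldn't:

```python
# Invented cell voltages: 11 cells hold ~1.25V under a modest load while
# 1 near-empty cell collapses. The tap never crosses a 13.2V Alf
# threshold, but the tap slope spikes when the weak cell drops out -
# which is what a slope-based neg recal would catch.

healthy = 11 * 1.25                            # 13.75V from the good cells
weak_cell = [1.10, 1.05, 0.95, 0.60, 0.20]     # collapsing cell, per sample

tap = [healthy + w for w in weak_cell]
slopes = [round(b - a, 2) for a, b in zip(tap, tap[1:])]

print(min(tap) > 13.2)   # True: the absolute threshold never trips
print(slopes)            # [-0.05, -0.1, -0.35, -0.4]: slope steepens fast
```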

Ideally - and this is something I've been seeing - you see Alf and neg recal at virtually the same time. Cell voltages are tanking and total tap voltage is below 13.2V... Everything's pretty well matched and balanced.